Oct 28 00:12:16.206445 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 22:07:42 -00 2025 Oct 28 00:12:16.206477 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bb8cbc137ff563234eef33bdd51a5c9ee67c90d62b83654276e2a4d312ac5ee1 Oct 28 00:12:16.206492 kernel: BIOS-provided physical RAM map: Oct 28 00:12:16.206502 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 28 00:12:16.206511 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 28 00:12:16.206520 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 28 00:12:16.206532 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 28 00:12:16.206541 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 28 00:12:16.206551 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 28 00:12:16.206563 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 28 00:12:16.206572 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 28 00:12:16.206581 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 28 00:12:16.206591 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 28 00:12:16.206600 kernel: NX (Execute Disable) protection: active Oct 28 00:12:16.206614 kernel: APIC: Static calls initialized Oct 28 00:12:16.206624 kernel: SMBIOS 2.8 present. 
Oct 28 00:12:16.206635 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 28 00:12:16.206645 kernel: DMI: Memory slots populated: 1/1 Oct 28 00:12:16.206655 kernel: Hypervisor detected: KVM Oct 28 00:12:16.206666 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 28 00:12:16.206676 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 28 00:12:16.206686 kernel: kvm-clock: using sched offset of 3678853439 cycles Oct 28 00:12:16.206696 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 28 00:12:16.206707 kernel: tsc: Detected 2794.750 MHz processor Oct 28 00:12:16.206721 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 28 00:12:16.206732 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 28 00:12:16.206743 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 28 00:12:16.206754 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 28 00:12:16.206764 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 28 00:12:16.206775 kernel: Using GB pages for direct mapping Oct 28 00:12:16.206786 kernel: ACPI: Early table checksum verification disabled Oct 28 00:12:16.206799 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 28 00:12:16.206809 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 00:12:16.206820 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 00:12:16.206831 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 00:12:16.206841 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 28 00:12:16.206852 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 00:12:16.206862 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 00:12:16.206875 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 00:12:16.206886 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 00:12:16.206901 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 28 00:12:16.206912 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 28 00:12:16.206923 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 28 00:12:16.206951 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 28 00:12:16.206962 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 28 00:12:16.206973 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 28 00:12:16.206984 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 28 00:12:16.206995 kernel: No NUMA configuration found Oct 28 00:12:16.207006 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 28 00:12:16.207020 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Oct 28 00:12:16.207031 kernel: Zone ranges: Oct 28 00:12:16.207042 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 28 00:12:16.207053 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 28 00:12:16.207064 kernel: Normal empty Oct 28 00:12:16.207075 kernel: Device empty Oct 28 00:12:16.207086 kernel: Movable zone start for each node Oct 28 00:12:16.207096 kernel: Early memory node ranges Oct 28 00:12:16.207110 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 28 00:12:16.207121 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 28 00:12:16.207144 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 28 00:12:16.207156 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 28 00:12:16.207167 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 28 00:12:16.207179 kernel: On node 0, zone DMA32: 12324 pages in unavailable 
ranges Oct 28 00:12:16.207190 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 28 00:12:16.207201 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 28 00:12:16.207214 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 28 00:12:16.207225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 28 00:12:16.207251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 28 00:12:16.207262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 28 00:12:16.207273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 28 00:12:16.207284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 28 00:12:16.207295 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 28 00:12:16.207309 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 28 00:12:16.207320 kernel: TSC deadline timer available Oct 28 00:12:16.207331 kernel: CPU topo: Max. logical packages: 1 Oct 28 00:12:16.207342 kernel: CPU topo: Max. logical dies: 1 Oct 28 00:12:16.207353 kernel: CPU topo: Max. dies per package: 1 Oct 28 00:12:16.207364 kernel: CPU topo: Max. threads per core: 1 Oct 28 00:12:16.207375 kernel: CPU topo: Num. cores per package: 4 Oct 28 00:12:16.207388 kernel: CPU topo: Num. 
threads per package: 4 Oct 28 00:12:16.207399 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 28 00:12:16.207410 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 28 00:12:16.207421 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 28 00:12:16.207432 kernel: kvm-guest: setup PV sched yield Oct 28 00:12:16.207443 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 28 00:12:16.207455 kernel: Booting paravirtualized kernel on KVM Oct 28 00:12:16.207466 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 28 00:12:16.207481 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 28 00:12:16.207492 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 28 00:12:16.207504 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 28 00:12:16.207515 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 28 00:12:16.207526 kernel: kvm-guest: PV spinlocks enabled Oct 28 00:12:16.207537 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 28 00:12:16.207550 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bb8cbc137ff563234eef33bdd51a5c9ee67c90d62b83654276e2a4d312ac5ee1 Oct 28 00:12:16.207564 kernel: random: crng init done Oct 28 00:12:16.207575 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 28 00:12:16.207586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 28 00:12:16.207597 kernel: Fallback order for Node 0: 0 Oct 28 00:12:16.207608 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 642938 Oct 28 00:12:16.207619 kernel: Policy zone: DMA32 Oct 28 00:12:16.207630 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 28 00:12:16.207644 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 28 00:12:16.207654 kernel: ftrace: allocating 40092 entries in 157 pages Oct 28 00:12:16.207665 kernel: ftrace: allocated 157 pages with 5 groups Oct 28 00:12:16.207676 kernel: Dynamic Preempt: voluntary Oct 28 00:12:16.207686 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 28 00:12:16.207698 kernel: rcu: RCU event tracing is enabled. Oct 28 00:12:16.207709 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 28 00:12:16.207723 kernel: Trampoline variant of Tasks RCU enabled. Oct 28 00:12:16.207734 kernel: Rude variant of Tasks RCU enabled. Oct 28 00:12:16.207744 kernel: Tracing variant of Tasks RCU enabled. Oct 28 00:12:16.207755 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 28 00:12:16.207765 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 28 00:12:16.207776 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 00:12:16.207787 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 00:12:16.207798 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 00:12:16.207812 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 28 00:12:16.207822 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 28 00:12:16.207841 kernel: Console: colour VGA+ 80x25 Oct 28 00:12:16.207854 kernel: printk: legacy console [ttyS0] enabled Oct 28 00:12:16.207865 kernel: ACPI: Core revision 20240827 Oct 28 00:12:16.207877 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 28 00:12:16.207952 kernel: APIC: Switch to symmetric I/O mode setup Oct 28 00:12:16.207967 kernel: x2apic enabled Oct 28 00:12:16.207979 kernel: APIC: Switched APIC routing to: physical x2apic Oct 28 00:12:16.207994 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 28 00:12:16.208005 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 28 00:12:16.208016 kernel: kvm-guest: setup PV IPIs Oct 28 00:12:16.208028 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 28 00:12:16.208042 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 28 00:12:16.208053 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Oct 28 00:12:16.208064 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 28 00:12:16.208076 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 28 00:12:16.208088 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 28 00:12:16.208099 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 28 00:12:16.208111 kernel: Spectre V2 : Mitigation: Retpolines Oct 28 00:12:16.208125 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 28 00:12:16.208148 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 28 00:12:16.208160 kernel: active return thunk: retbleed_return_thunk Oct 28 00:12:16.208171 kernel: RETBleed: Mitigation: untrained return thunk Oct 28 00:12:16.208182 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 28 00:12:16.208194 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 28 00:12:16.208205 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 28 00:12:16.208221 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 28 00:12:16.208232 kernel: active return thunk: srso_return_thunk Oct 28 00:12:16.208244 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 28 00:12:16.208257 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 28 00:12:16.208268 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 28 00:12:16.208279 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 28 00:12:16.208293 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 28 00:12:16.208305 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 28 00:12:16.208316 kernel: Freeing SMP alternatives memory: 32K Oct 28 00:12:16.208327 kernel: pid_max: default: 32768 minimum: 301 Oct 28 00:12:16.208339 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 28 00:12:16.208350 kernel: landlock: Up and running. Oct 28 00:12:16.208361 kernel: SELinux: Initializing. Oct 28 00:12:16.208373 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 28 00:12:16.208387 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 28 00:12:16.208398 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 28 00:12:16.208410 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 28 00:12:16.208421 kernel: ... version: 0 Oct 28 00:12:16.208450 kernel: ... bit width: 48 Oct 28 00:12:16.208471 kernel: ... generic registers: 6 Oct 28 00:12:16.208483 kernel: ... value mask: 0000ffffffffffff Oct 28 00:12:16.208497 kernel: ... max period: 00007fffffffffff Oct 28 00:12:16.208508 kernel: ... fixed-purpose events: 0 Oct 28 00:12:16.208519 kernel: ... event mask: 000000000000003f Oct 28 00:12:16.208529 kernel: signal: max sigframe size: 1776 Oct 28 00:12:16.208541 kernel: rcu: Hierarchical SRCU implementation. Oct 28 00:12:16.208552 kernel: rcu: Max phase no-delay instances is 400. Oct 28 00:12:16.208563 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 28 00:12:16.208576 kernel: smp: Bringing up secondary CPUs ... Oct 28 00:12:16.208587 kernel: smpboot: x86: Booting SMP configuration: Oct 28 00:12:16.208598 kernel: .... 
node #0, CPUs: #1 #2 #3 Oct 28 00:12:16.208609 kernel: smp: Brought up 1 node, 4 CPUs Oct 28 00:12:16.208620 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Oct 28 00:12:16.208632 kernel: Memory: 2451432K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15960K init, 2084K bss, 114380K reserved, 0K cma-reserved) Oct 28 00:12:16.208643 kernel: devtmpfs: initialized Oct 28 00:12:16.208657 kernel: x86/mm: Memory block size: 128MB Oct 28 00:12:16.208668 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 28 00:12:16.208679 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 28 00:12:16.208690 kernel: pinctrl core: initialized pinctrl subsystem Oct 28 00:12:16.208703 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 28 00:12:16.208714 kernel: audit: initializing netlink subsys (disabled) Oct 28 00:12:16.208727 kernel: audit: type=2000 audit(1761610333.760:1): state=initialized audit_enabled=0 res=1 Oct 28 00:12:16.208741 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 28 00:12:16.208753 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 28 00:12:16.208765 kernel: cpuidle: using governor menu Oct 28 00:12:16.208777 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 28 00:12:16.208790 kernel: dca service started, version 1.12.1 Oct 28 00:12:16.208802 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 28 00:12:16.208814 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 28 00:12:16.208828 kernel: PCI: Using configuration type 1 for base access Oct 28 00:12:16.208840 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 28 00:12:16.208853 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 28 00:12:16.208865 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 28 00:12:16.208876 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 28 00:12:16.208889 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 28 00:12:16.208901 kernel: ACPI: Added _OSI(Module Device) Oct 28 00:12:16.208915 kernel: ACPI: Added _OSI(Processor Device) Oct 28 00:12:16.208940 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 28 00:12:16.208953 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 28 00:12:16.208965 kernel: ACPI: Interpreter enabled Oct 28 00:12:16.208977 kernel: ACPI: PM: (supports S0 S3 S5) Oct 28 00:12:16.208989 kernel: ACPI: Using IOAPIC for interrupt routing Oct 28 00:12:16.209001 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 28 00:12:16.209014 kernel: PCI: Using E820 reservations for host bridge windows Oct 28 00:12:16.209029 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 28 00:12:16.209041 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 28 00:12:16.209337 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 28 00:12:16.209550 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 28 00:12:16.209766 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 28 00:12:16.209788 kernel: PCI host bridge to bus 0000:00 Oct 28 00:12:16.210015 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 28 00:12:16.210225 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 28 00:12:16.210424 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 28 00:12:16.210611 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 28 00:12:16.210801 kernel: 
pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 28 00:12:16.211092 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 28 00:12:16.211306 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 28 00:12:16.211524 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 28 00:12:16.211704 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 28 00:12:16.211899 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Oct 28 00:12:16.212164 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Oct 28 00:12:16.212337 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Oct 28 00:12:16.212504 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 28 00:12:16.212678 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 28 00:12:16.212847 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Oct 28 00:12:16.213996 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Oct 28 00:12:16.214198 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Oct 28 00:12:16.214387 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 28 00:12:16.214557 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Oct 28 00:12:16.214725 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Oct 28 00:12:16.214893 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Oct 28 00:12:16.216160 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 28 00:12:16.216634 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Oct 28 00:12:16.216840 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Oct 28 00:12:16.217066 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 28 00:12:16.217284 kernel: pci 0000:00:04.0: ROM [mem 
0xfeb80000-0xfebbffff pref] Oct 28 00:12:16.217500 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 28 00:12:16.217708 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 28 00:12:16.217917 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 28 00:12:16.218148 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Oct 28 00:12:16.218352 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Oct 28 00:12:16.218566 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 28 00:12:16.218768 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 28 00:12:16.218789 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 28 00:12:16.218801 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 28 00:12:16.218813 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 28 00:12:16.218824 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 28 00:12:16.218836 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 28 00:12:16.218847 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 28 00:12:16.218859 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 28 00:12:16.218873 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 28 00:12:16.218885 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 28 00:12:16.218897 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 28 00:12:16.218908 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 28 00:12:16.218920 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 28 00:12:16.218944 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 28 00:12:16.218957 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 28 00:12:16.218971 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 28 00:12:16.218983 kernel: ACPI: PCI: 
Interrupt link GSIH configured for IRQ 23 Oct 28 00:12:16.218994 kernel: iommu: Default domain type: Translated Oct 28 00:12:16.219006 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 28 00:12:16.219018 kernel: PCI: Using ACPI for IRQ routing Oct 28 00:12:16.219029 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 28 00:12:16.219041 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 28 00:12:16.219055 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 28 00:12:16.219318 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 28 00:12:16.219497 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 28 00:12:16.219664 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 28 00:12:16.219676 kernel: vgaarb: loaded Oct 28 00:12:16.219685 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 28 00:12:16.219694 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 28 00:12:16.219706 kernel: clocksource: Switched to clocksource kvm-clock Oct 28 00:12:16.219715 kernel: VFS: Disk quotas dquot_6.6.0 Oct 28 00:12:16.219724 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 28 00:12:16.219732 kernel: pnp: PnP ACPI init Oct 28 00:12:16.219908 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 28 00:12:16.219921 kernel: pnp: PnP ACPI: found 6 devices Oct 28 00:12:16.219960 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 28 00:12:16.219970 kernel: NET: Registered PF_INET protocol family Oct 28 00:12:16.219979 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 28 00:12:16.219988 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 28 00:12:16.219996 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 28 00:12:16.220005 kernel: TCP established hash table entries: 32768 
(order: 6, 262144 bytes, linear) Oct 28 00:12:16.220014 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 28 00:12:16.220024 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 28 00:12:16.220033 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 28 00:12:16.220042 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 28 00:12:16.220050 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 28 00:12:16.220059 kernel: NET: Registered PF_XDP protocol family Oct 28 00:12:16.220232 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 28 00:12:16.220387 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 28 00:12:16.220543 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 28 00:12:16.220696 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 28 00:12:16.220850 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 28 00:12:16.221029 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 28 00:12:16.221041 kernel: PCI: CLS 0 bytes, default 64 Oct 28 00:12:16.221050 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 28 00:12:16.221059 kernel: Initialise system trusted keyrings Oct 28 00:12:16.221071 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 28 00:12:16.221080 kernel: Key type asymmetric registered Oct 28 00:12:16.221089 kernel: Asymmetric key parser 'x509' registered Oct 28 00:12:16.221097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 28 00:12:16.221106 kernel: io scheduler mq-deadline registered Oct 28 00:12:16.221114 kernel: io scheduler kyber registered Oct 28 00:12:16.221123 kernel: io scheduler bfq registered Oct 28 00:12:16.221142 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 28 00:12:16.221151 kernel: ACPI: 
\_SB_.GSIG: Enabled at IRQ 22 Oct 28 00:12:16.221160 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 28 00:12:16.221169 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 28 00:12:16.221177 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 28 00:12:16.221186 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 28 00:12:16.221195 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 28 00:12:16.221206 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 28 00:12:16.221214 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 28 00:12:16.221223 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 28 00:12:16.221418 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 28 00:12:16.221609 kernel: rtc_cmos 00:04: registered as rtc0 Oct 28 00:12:16.221808 kernel: rtc_cmos 00:04: setting system clock to 2025-10-28T00:12:14 UTC (1761610334) Oct 28 00:12:16.222026 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 28 00:12:16.222043 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 28 00:12:16.222054 kernel: NET: Registered PF_INET6 protocol family Oct 28 00:12:16.222065 kernel: Segment Routing with IPv6 Oct 28 00:12:16.222077 kernel: In-situ OAM (IOAM) with IPv6 Oct 28 00:12:16.222088 kernel: NET: Registered PF_PACKET protocol family Oct 28 00:12:16.222110 kernel: Key type dns_resolver registered Oct 28 00:12:16.222146 kernel: IPI shorthand broadcast: enabled Oct 28 00:12:16.222166 kernel: sched_clock: Marking stable (1201002640, 254010845)->(1518504245, -63490760) Oct 28 00:12:16.222177 kernel: registered taskstats version 1 Oct 28 00:12:16.222189 kernel: Loading compiled-in X.509 certificates Oct 28 00:12:16.222201 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 83e3b158efa5b2676019c86f243fd682d3067554' Oct 28 00:12:16.222211 kernel: Demotion targets for Node 0: null 
Oct 28 00:12:16.222219 kernel: Key type .fscrypt registered Oct 28 00:12:16.222228 kernel: Key type fscrypt-provisioning registered Oct 28 00:12:16.222240 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 28 00:12:16.222250 kernel: ima: Allocated hash algorithm: sha1 Oct 28 00:12:16.222261 kernel: ima: No architecture policies found Oct 28 00:12:16.222271 kernel: clk: Disabling unused clocks Oct 28 00:12:16.222282 kernel: Freeing unused kernel image (initmem) memory: 15960K Oct 28 00:12:16.222293 kernel: Write protecting the kernel read-only data: 40960k Oct 28 00:12:16.222303 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 28 00:12:16.222316 kernel: Run /init as init process Oct 28 00:12:16.222327 kernel: with arguments: Oct 28 00:12:16.222337 kernel: /init Oct 28 00:12:16.222348 kernel: with environment: Oct 28 00:12:16.222358 kernel: HOME=/ Oct 28 00:12:16.222368 kernel: TERM=linux Oct 28 00:12:16.222379 kernel: SCSI subsystem initialized Oct 28 00:12:16.222389 kernel: libata version 3.00 loaded. 
Oct 28 00:12:16.222565 kernel: ahci 0000:00:1f.2: version 3.0
Oct 28 00:12:16.222595 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 28 00:12:16.222786 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 28 00:12:16.222999 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 28 00:12:16.223209 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 28 00:12:16.223423 kernel: scsi host0: ahci
Oct 28 00:12:16.223626 kernel: scsi host1: ahci
Oct 28 00:12:16.223804 kernel: scsi host2: ahci
Oct 28 00:12:16.224045 kernel: scsi host3: ahci
Oct 28 00:12:16.224239 kernel: scsi host4: ahci
Oct 28 00:12:16.224424 kernel: scsi host5: ahci
Oct 28 00:12:16.224437 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Oct 28 00:12:16.224447 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Oct 28 00:12:16.224456 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Oct 28 00:12:16.224465 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Oct 28 00:12:16.224474 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Oct 28 00:12:16.224483 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Oct 28 00:12:16.224494 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 28 00:12:16.224504 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 28 00:12:16.224513 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 28 00:12:16.224521 kernel: ata3.00: LPM support broken, forcing max_power
Oct 28 00:12:16.224530 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 28 00:12:16.224539 kernel: ata3.00: applying bridge limits
Oct 28 00:12:16.224548 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 28 00:12:16.224558 kernel: ata3.00: LPM support broken, forcing max_power
Oct 28 00:12:16.224567 kernel: ata3.00: configured for UDMA/100
Oct 28 00:12:16.224576 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 28 00:12:16.224585 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 28 00:12:16.224779 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 28 00:12:16.224988 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 28 00:12:16.225172 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 28 00:12:16.225185 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 28 00:12:16.225194 kernel: GPT:16515071 != 27000831
Oct 28 00:12:16.225203 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 28 00:12:16.225212 kernel: GPT:16515071 != 27000831
Oct 28 00:12:16.225220 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 28 00:12:16.225229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 28 00:12:16.225241 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 00:12:16.225426 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 28 00:12:16.225438 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 28 00:12:16.225619 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 28 00:12:16.225631 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
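The repeated GPT warnings above mean the primary header records its alternate (backup) header at LBA 16515071 while the virtio disk actually ends at LBA 27000831 — the usual symptom of an image that was grown after partitioning; the kernel only warns and continues. A minimal Python sketch of the consistency check the kernel is performing (the header bytes here are synthetic, not read from the disk):

```python
import struct

def gpt_alternate_mismatch(header: bytes, disk_sectors: int) -> bool:
    # GPT primary header: 8-byte "EFI PART" signature at offset 0,
    # 8-byte little-endian AlternateLBA field at offset 32.
    assert header[:8] == b"EFI PART"
    (alt_lba,) = struct.unpack_from("<Q", header, 32)
    # The backup header should sit on the disk's last LBA.
    return alt_lba != disk_sectors - 1

# Synthetic header mimicking the log: backup recorded at LBA 16515071,
# but the resized disk actually has 27000832 sectors (last LBA 27000831).
hdr = bytearray(92)
hdr[:8] = b"EFI PART"
struct.pack_into("<Q", hdr, 32, 16515071)
print(gpt_alternate_mismatch(bytes(hdr), 27000832))  # → True
```

Tools like `sgdisk -e` or GNU Parted (as the kernel suggests) fix this by relocating the backup structures to the true end of the disk.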
Oct 28 00:12:16.225641 kernel: device-mapper: uevent: version 1.0.3
Oct 28 00:12:16.225653 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 28 00:12:16.225662 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 28 00:12:16.225673 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 00:12:16.225682 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 00:12:16.225691 kernel: raid6: avx2x4 gen() 29386 MB/s
Oct 28 00:12:16.225701 kernel: raid6: avx2x2 gen() 29956 MB/s
Oct 28 00:12:16.225710 kernel: raid6: avx2x1 gen() 24933 MB/s
Oct 28 00:12:16.225719 kernel: raid6: using algorithm avx2x2 gen() 29956 MB/s
Oct 28 00:12:16.225728 kernel: raid6: .... xor() 17627 MB/s, rmw enabled
Oct 28 00:12:16.225737 kernel: raid6: using avx2x2 recovery algorithm
Oct 28 00:12:16.225746 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 00:12:16.225754 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 00:12:16.225762 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 00:12:16.225773 kernel: xor: automatically using best checksumming function avx
Oct 28 00:12:16.225782 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 00:12:16.225790 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 28 00:12:16.225799 kernel: BTRFS: device fsid 4fda63c0-e2d9-4674-a954-1a6d4907fb92 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (176)
Oct 28 00:12:16.225809 kernel: BTRFS info (device dm-0): first mount of filesystem 4fda63c0-e2d9-4674-a954-1a6d4907fb92
Oct 28 00:12:16.225818 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 28 00:12:16.225827 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 28 00:12:16.225838 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 28 00:12:16.225847 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 00:12:16.225855 kernel: loop: module loaded
Oct 28 00:12:16.225864 kernel: loop0: detected capacity change from 0 to 100120
Oct 28 00:12:16.225875 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 28 00:12:16.225885 systemd[1]: Successfully made /usr/ read-only.
Oct 28 00:12:16.225897 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 28 00:12:16.225909 systemd[1]: Detected virtualization kvm.
Oct 28 00:12:16.225918 systemd[1]: Detected architecture x86-64.
Oct 28 00:12:16.225942 systemd[1]: Running in initrd.
Oct 28 00:12:16.225951 systemd[1]: No hostname configured, using default hostname.
Oct 28 00:12:16.225961 systemd[1]: Hostname set to .
Oct 28 00:12:16.225973 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 28 00:12:16.225982 systemd[1]: Queued start job for default target initrd.target.
Oct 28 00:12:16.225991 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 28 00:12:16.226001 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 00:12:16.226010 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 00:12:16.226020 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 28 00:12:16.226029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 28 00:12:16.226041 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 28 00:12:16.226051 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 28 00:12:16.226060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 00:12:16.226070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 28 00:12:16.226079 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 28 00:12:16.226089 systemd[1]: Reached target paths.target - Path Units.
Oct 28 00:12:16.226100 systemd[1]: Reached target slices.target - Slice Units.
Oct 28 00:12:16.226109 systemd[1]: Reached target swap.target - Swaps.
Oct 28 00:12:16.226119 systemd[1]: Reached target timers.target - Timer Units.
Oct 28 00:12:16.226128 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 28 00:12:16.226145 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 28 00:12:16.226154 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 28 00:12:16.226164 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 28 00:12:16.226176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 00:12:16.226185 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 28 00:12:16.226194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 00:12:16.226204 systemd[1]: Reached target sockets.target - Socket Units.
Oct 28 00:12:16.226213 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 28 00:12:16.226223 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 28 00:12:16.226234 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 28 00:12:16.226245 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 28 00:12:16.226258 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 28 00:12:16.226270 systemd[1]: Starting systemd-fsck-usr.service...
Oct 28 00:12:16.226281 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 28 00:12:16.226293 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 28 00:12:16.226305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 28 00:12:16.226319 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 28 00:12:16.226331 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 00:12:16.226343 systemd[1]: Finished systemd-fsck-usr.service.
Oct 28 00:12:16.226355 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 28 00:12:16.226396 systemd-journald[310]: Collecting audit messages is disabled.
Oct 28 00:12:16.226423 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 28 00:12:16.226434 kernel: Bridge firewalling registered
Oct 28 00:12:16.226448 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 28 00:12:16.226460 systemd-journald[310]: Journal started
Oct 28 00:12:16.226479 systemd-journald[310]: Runtime Journal (/run/log/journal/6b15b636df154d3eaa2945482685311e) is 6M, max 48.3M, 42.2M free.
Oct 28 00:12:16.223219 systemd-modules-load[313]: Inserted module 'br_netfilter'
Oct 28 00:12:16.229111 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 28 00:12:16.232560 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 28 00:12:16.237221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 28 00:12:16.239289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 28 00:12:16.245367 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 28 00:12:16.258969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 00:12:16.329243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 00:12:16.334184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 28 00:12:16.339725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 28 00:12:16.343046 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 28 00:12:16.344604 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 28 00:12:16.362327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 28 00:12:16.379375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 28 00:12:16.385682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 28 00:12:16.416522 systemd-resolved[342]: Positive Trust Anchors:
Oct 28 00:12:16.416538 systemd-resolved[342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 28 00:12:16.416543 systemd-resolved[342]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 28 00:12:16.416584 systemd-resolved[342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 28 00:12:16.433890 systemd-resolved[342]: Defaulting to hostname 'linux'.
Oct 28 00:12:16.435020 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 28 00:12:16.455653 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 28 00:12:16.485192 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bb8cbc137ff563234eef33bdd51a5c9ee67c90d62b83654276e2a4d312ac5ee1
Oct 28 00:12:16.608963 kernel: Loading iSCSI transport class v2.0-870.
Oct 28 00:12:16.623975 kernel: iscsi: registered transport (tcp)
Oct 28 00:12:16.648308 kernel: iscsi: registered transport (qla4xxx)
Oct 28 00:12:16.648374 kernel: QLogic iSCSI HBA Driver
Oct 28 00:12:16.676974 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 28 00:12:16.708452 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
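The dracut-cmdline line above echoes the kernel command line with `rootflags=rw mount.usrflags=ro` appearing twice (once from the bootloader line, once prepended by dracut); for flag-style parameters such duplicates are harmless. A rough Python sketch of splitting a command line of this shape into key/value pairs, where a repeated key simply keeps the last value seen (real kernel and systemd parsing handles more cases, e.g. quoting and per-module parameters):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {param: value}; bare flags map to ''."""
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value  # a repeated key keeps the last value seen
    return params

# Abbreviated version of the command line from the log above.
args = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a rootflags=rw mount.usrflags=ro "
    "root=LABEL=ROOT console=ttyS0,115200 rootflags=rw flatcar.first_boot=detected"
)
print(args["root"], args["console"])  # → LABEL=ROOT ttyS0,115200
```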
Oct 28 00:12:16.711071 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 28 00:12:16.763380 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 28 00:12:16.768531 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 28 00:12:16.770400 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 28 00:12:16.814185 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 28 00:12:16.819763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 28 00:12:16.852770 systemd-udevd[592]: Using default interface naming scheme 'v257'.
Oct 28 00:12:16.867078 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 28 00:12:16.872024 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 28 00:12:16.907135 dracut-pre-trigger[653]: rd.md=0: removing MD RAID activation
Oct 28 00:12:16.913367 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 28 00:12:16.918215 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 28 00:12:16.940211 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 28 00:12:16.944735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 28 00:12:16.979550 systemd-networkd[710]: lo: Link UP
Oct 28 00:12:16.979563 systemd-networkd[710]: lo: Gained carrier
Oct 28 00:12:16.980409 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 28 00:12:16.980984 systemd[1]: Reached target network.target - Network.
Oct 28 00:12:17.042243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 00:12:17.045687 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 28 00:12:17.124768 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 28 00:12:17.141961 kernel: cryptd: max_cpu_qlen set to 1000
Oct 28 00:12:17.143844 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 28 00:12:17.149594 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 28 00:12:17.168958 kernel: AES CTR mode by8 optimization enabled
Oct 28 00:12:17.168996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 28 00:12:17.171472 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 28 00:12:17.171477 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 28 00:12:17.173186 systemd-networkd[710]: eth0: Link UP
Oct 28 00:12:17.173398 systemd-networkd[710]: eth0: Gained carrier
Oct 28 00:12:17.173408 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 28 00:12:17.177593 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 28 00:12:17.188244 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 28 00:12:17.192247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 28 00:12:17.192377 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 00:12:17.197997 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 28 00:12:17.204669 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 28 00:12:17.211274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 28 00:12:17.214451 disk-uuid[818]: Primary Header is updated.
Oct 28 00:12:17.214451 disk-uuid[818]: Secondary Entries is updated.
Oct 28 00:12:17.214451 disk-uuid[818]: Secondary Header is updated.
Oct 28 00:12:17.280618 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 28 00:12:17.327750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 00:12:17.347581 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 28 00:12:17.350375 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 00:12:17.350893 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 28 00:12:17.359044 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 28 00:12:17.396913 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 28 00:12:18.306020 disk-uuid[830]: Warning: The kernel is still using the old partition table.
Oct 28 00:12:18.306020 disk-uuid[830]: The new table will be used at the next reboot or after you
Oct 28 00:12:18.306020 disk-uuid[830]: run partprobe(8) or kpartx(8)
Oct 28 00:12:18.306020 disk-uuid[830]: The operation has completed successfully.
Oct 28 00:12:18.332839 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 28 00:12:18.333055 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 28 00:12:18.336319 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 28 00:12:18.392293 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859)
Oct 28 00:12:18.392343 kernel: BTRFS info (device vda6): first mount of filesystem e5ad038a-d5ed-4440-8f1c-902f5112301b
Oct 28 00:12:18.392361 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 28 00:12:18.397664 kernel: BTRFS info (device vda6): turning on async discard
Oct 28 00:12:18.397691 kernel: BTRFS info (device vda6): enabling free space tree
Oct 28 00:12:18.404969 kernel: BTRFS info (device vda6): last unmount of filesystem e5ad038a-d5ed-4440-8f1c-902f5112301b
Oct 28 00:12:18.406041 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 28 00:12:18.410737 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 28 00:12:18.572823 ignition[878]: Ignition 2.22.0
Oct 28 00:12:18.572840 ignition[878]: Stage: fetch-offline
Oct 28 00:12:18.572878 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Oct 28 00:12:18.572889 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 00:12:18.572986 ignition[878]: parsed url from cmdline: ""
Oct 28 00:12:18.572990 ignition[878]: no config URL provided
Oct 28 00:12:18.572995 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Oct 28 00:12:18.573006 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Oct 28 00:12:18.573047 ignition[878]: op(1): [started] loading QEMU firmware config module
Oct 28 00:12:18.573052 ignition[878]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 28 00:12:18.583827 ignition[878]: op(1): [finished] loading QEMU firmware config module
Oct 28 00:12:18.672886 ignition[878]: parsing config with SHA512: 945bec3fdf85dc3dd242f44efa0a2a6dba322aecc7a11787b671904a02538e31f038ce6120d6edac2a273ddb48e9ec405dce41ce8394fe546a98300e172f326b
Oct 28 00:12:18.682177 unknown[878]: fetched base config from "system"
Oct 28 00:12:18.682194 unknown[878]: fetched user config from "qemu"
Oct 28 00:12:18.682545 ignition[878]: fetch-offline: fetch-offline passed
Oct 28 00:12:18.685702 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 28 00:12:18.682613 ignition[878]: Ignition finished successfully
Oct 28 00:12:18.688531 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 28 00:12:18.689535 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 28 00:12:18.747440 ignition[890]: Ignition 2.22.0
Oct 28 00:12:18.747459 ignition[890]: Stage: kargs
Oct 28 00:12:18.747634 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Oct 28 00:12:18.747645 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 00:12:18.748369 ignition[890]: kargs: kargs passed
Oct 28 00:12:18.752868 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 28 00:12:18.748438 ignition[890]: Ignition finished successfully
Oct 28 00:12:18.756651 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 28 00:12:18.800564 ignition[898]: Ignition 2.22.0
Oct 28 00:12:18.800577 ignition[898]: Stage: disks
Oct 28 00:12:18.800727 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Oct 28 00:12:18.800737 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 00:12:18.804915 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 28 00:12:18.801681 ignition[898]: disks: disks passed
Oct 28 00:12:18.807158 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 28 00:12:18.801730 ignition[898]: Ignition finished successfully
Oct 28 00:12:18.810862 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 28 00:12:18.815007 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 28 00:12:18.817871 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 28 00:12:18.821431 systemd[1]: Reached target basic.target - Basic System.
Oct 28 00:12:18.823825 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 28 00:12:18.872549 systemd-fsck[908]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 28 00:12:18.881525 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 28 00:12:18.885545 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 28 00:12:19.010987 kernel: EXT4-fs (vda9): mounted filesystem b815ee5e-3be8-4bde-b70d-1e4425ecc899 r/w with ordered data mode. Quota mode: none.
Oct 28 00:12:19.011811 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 28 00:12:19.015085 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 28 00:12:19.020832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 28 00:12:19.024900 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 28 00:12:19.027022 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 28 00:12:19.027093 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 28 00:12:19.027130 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 28 00:12:19.048317 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 28 00:12:19.052802 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 28 00:12:19.059960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (917)
Oct 28 00:12:19.059987 kernel: BTRFS info (device vda6): first mount of filesystem e5ad038a-d5ed-4440-8f1c-902f5112301b
Oct 28 00:12:19.060002 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 28 00:12:19.063309 kernel: BTRFS info (device vda6): turning on async discard
Oct 28 00:12:19.063338 kernel: BTRFS info (device vda6): enabling free space tree
Oct 28 00:12:19.064863 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 28 00:12:19.114585 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Oct 28 00:12:19.121181 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
Oct 28 00:12:19.127325 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
Oct 28 00:12:19.133356 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 28 00:12:19.200121 systemd-networkd[710]: eth0: Gained IPv6LL
Oct 28 00:12:19.237949 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 28 00:12:19.256899 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 28 00:12:19.261045 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 28 00:12:19.281971 kernel: BTRFS info (device vda6): last unmount of filesystem e5ad038a-d5ed-4440-8f1c-902f5112301b
Oct 28 00:12:19.293666 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 28 00:12:19.311665 ignition[1031]: INFO : Ignition 2.22.0
Oct 28 00:12:19.311665 ignition[1031]: INFO : Stage: mount
Oct 28 00:12:19.372981 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 00:12:19.372981 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 00:12:19.372981 ignition[1031]: INFO : mount: mount passed
Oct 28 00:12:19.372981 ignition[1031]: INFO : Ignition finished successfully
Oct 28 00:12:19.365158 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 28 00:12:19.375116 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 28 00:12:19.376948 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 28 00:12:19.409888 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 28 00:12:19.437429 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1044)
Oct 28 00:12:19.437517 kernel: BTRFS info (device vda6): first mount of filesystem e5ad038a-d5ed-4440-8f1c-902f5112301b
Oct 28 00:12:19.437530 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 28 00:12:19.442978 kernel: BTRFS info (device vda6): turning on async discard
Oct 28 00:12:19.443029 kernel: BTRFS info (device vda6): enabling free space tree
Oct 28 00:12:19.444648 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 28 00:12:19.483127 ignition[1061]: INFO : Ignition 2.22.0
Oct 28 00:12:19.483127 ignition[1061]: INFO : Stage: files
Oct 28 00:12:19.486152 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 00:12:19.486152 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 00:12:19.486152 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
Oct 28 00:12:19.491776 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 28 00:12:19.491776 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 28 00:12:19.499264 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 28 00:12:19.501697 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 28 00:12:19.504388 unknown[1061]: wrote ssh authorized keys file for user: core
Oct 28 00:12:19.506188 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 28 00:12:19.509425 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 28 00:12:19.512798 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 28 00:12:19.554281 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 28 00:12:19.618599 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 28 00:12:19.621893 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 28 00:12:19.621893 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 28 00:12:19.621893 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 28 00:12:19.621893 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 28 00:12:19.621893 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 28 00:12:19.621893 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 28 00:12:19.621893 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 28 00:12:19.621893 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 28 00:12:19.645243 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 28 00:12:19.645243 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 28 00:12:19.645243 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 28 00:12:19.645243 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 28 00:12:19.645243 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 28 00:12:19.645243 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 28 00:12:19.952519 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 28 00:12:20.458365 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 28 00:12:20.458365 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 28 00:12:20.465642 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 28 00:12:20.465642 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 28 00:12:20.465642 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 28 00:12:20.465642 ignition[1061]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 28 00:12:20.465642 ignition[1061]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 28 00:12:20.465642 ignition[1061]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 28 00:12:20.465642 ignition[1061]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 28 00:12:20.465642 ignition[1061]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 28 00:12:20.498755 ignition[1061]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 28 00:12:20.504598 ignition[1061]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 28 00:12:20.507602 ignition[1061]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 28 00:12:20.507602 ignition[1061]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 28 00:12:20.507602 ignition[1061]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 28 00:12:20.507602 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 28 00:12:20.507602 ignition[1061]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 28 00:12:20.507602 ignition[1061]: INFO : files: files passed
Oct 28 00:12:20.507602 ignition[1061]: INFO : Ignition finished successfully
Oct 28 00:12:20.514464 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 28 00:12:20.523429 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 28 00:12:20.527406 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 28 00:12:20.543385 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 28 00:12:20.543557 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 28 00:12:20.550094 initrd-setup-root-after-ignition[1089]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 28 00:12:20.555583 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 00:12:20.555583 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 00:12:20.559397 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 00:12:20.558765 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 28 00:12:20.559861 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 28 00:12:20.561822 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 28 00:12:20.627021 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 28 00:12:20.627174 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 28 00:12:20.628584 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 28 00:12:20.633367 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 28 00:12:20.637021 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 28 00:12:20.637997 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 28 00:12:20.683652 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 28 00:12:20.685992 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 28 00:12:20.712333 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 28 00:12:20.712532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 28 00:12:20.716657 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 00:12:20.720821 systemd[1]: Stopped target timers.target - Timer Units.
Oct 28 00:12:20.721695 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 28 00:12:20.721832 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 28 00:12:20.729302 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 28 00:12:20.730866 systemd[1]: Stopped target basic.target - Basic System.
Oct 28 00:12:20.735976 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 28 00:12:20.739154 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 28 00:12:20.740071 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 28 00:12:20.746059 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 28 00:12:20.749522 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 28 00:12:20.752896 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 28 00:12:20.753851 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 28 00:12:20.759438 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 28 00:12:20.762497 systemd[1]: Stopped target swap.target - Swaps.
Oct 28 00:12:20.765738 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 28 00:12:20.765914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 28 00:12:20.771143 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 28 00:12:20.774517 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 00:12:20.778058 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 28 00:12:20.778171 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 00:12:20.781799 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 28 00:12:20.781907 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 28 00:12:20.787083 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 28 00:12:20.787214 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 28 00:12:20.790561 systemd[1]: Stopped target paths.target - Path Units.
Oct 28 00:12:20.793552 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 28 00:12:20.800389 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 00:12:20.801383 systemd[1]: Stopped target slices.target - Slice Units.
Oct 28 00:12:20.805470 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 28 00:12:20.808422 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 28 00:12:20.808541 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 28 00:12:20.811647 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 28 00:12:20.811732 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 28 00:12:20.814671 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 28 00:12:20.814792 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 28 00:12:20.817616 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 28 00:12:20.817755 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 28 00:12:20.825808 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 28 00:12:20.826742 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 28 00:12:20.826973 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 00:12:20.850760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 28 00:12:20.851537 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 28 00:12:20.851670 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 28 00:12:20.852248 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 28 00:12:20.852348 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 00:12:20.860376 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 28 00:12:20.860552 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 28 00:12:20.871815 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 28 00:12:20.873853 ignition[1116]: INFO : Ignition 2.22.0
Oct 28 00:12:20.873853 ignition[1116]: INFO : Stage: umount
Oct 28 00:12:20.873853 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 00:12:20.873853 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 00:12:20.873853 ignition[1116]: INFO : umount: umount passed
Oct 28 00:12:20.873853 ignition[1116]: INFO : Ignition finished successfully
Oct 28 00:12:20.884181 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 28 00:12:20.885713 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 28 00:12:20.885886 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 28 00:12:20.891949 systemd[1]: Stopped target network.target - Network.
Oct 28 00:12:20.892680 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 28 00:12:20.892781 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 28 00:12:20.895837 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 28 00:12:20.895916 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 28 00:12:20.898912 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 28 00:12:20.899005 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 28 00:12:20.902466 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 28 00:12:20.902519 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 28 00:12:20.905869 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 28 00:12:20.908436 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 28 00:12:20.926593 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 28 00:12:20.926750 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 28 00:12:20.933190 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 28 00:12:20.933308 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 28 00:12:20.939508 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 28 00:12:20.941075 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 28 00:12:20.944769 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 28 00:12:20.944823 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 00:12:20.947517 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 28 00:12:20.953650 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 28 00:12:20.953728 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 28 00:12:20.957433 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 28 00:12:20.957519 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 28 00:12:20.958573 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 28 00:12:20.958655 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 28 00:12:20.966131 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 28 00:12:20.985816 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 28 00:12:20.986103 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 28 00:12:20.987923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 28 00:12:20.988002 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 28 00:12:20.992556 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 28 00:12:20.992597 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 00:12:20.995858 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 28 00:12:20.996075 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 28 00:12:21.004141 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 28 00:12:21.004226 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 28 00:12:21.005194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 28 00:12:21.005241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 28 00:12:21.015441 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 28 00:12:21.018495 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 28 00:12:21.018560 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 00:12:21.021656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 28 00:12:21.021711 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 00:12:21.025474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 28 00:12:21.025529 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 00:12:21.030332 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 28 00:12:21.030439 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 28 00:12:21.033077 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 28 00:12:21.033200 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 28 00:12:21.036646 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 28 00:12:21.036752 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 28 00:12:21.040913 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 28 00:12:21.042756 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 28 00:12:21.042821 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 28 00:12:21.049348 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 28 00:12:21.066091 systemd[1]: Switching root.
Oct 28 00:12:21.100430 systemd-journald[310]: Journal stopped
Oct 28 00:12:22.750080 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Oct 28 00:12:22.750161 kernel: SELinux: policy capability network_peer_controls=1
Oct 28 00:12:22.750180 kernel: SELinux: policy capability open_perms=1
Oct 28 00:12:22.750205 kernel: SELinux: policy capability extended_socket_class=1
Oct 28 00:12:22.750221 kernel: SELinux: policy capability always_check_network=0
Oct 28 00:12:22.750237 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 28 00:12:22.750257 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 28 00:12:22.750278 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 28 00:12:22.750294 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 28 00:12:22.750310 kernel: SELinux: policy capability userspace_initial_context=0
Oct 28 00:12:22.750329 kernel: audit: type=1403 audit(1761610341.785:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 28 00:12:22.750346 systemd[1]: Successfully loaded SELinux policy in 80.421ms.
Oct 28 00:12:22.750371 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.973ms.
Oct 28 00:12:22.750394 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 28 00:12:22.750412 systemd[1]: Detected virtualization kvm.
Oct 28 00:12:22.750428 systemd[1]: Detected architecture x86-64.
Oct 28 00:12:22.750445 systemd[1]: Detected first boot.
Oct 28 00:12:22.750465 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 28 00:12:22.750482 zram_generator::config[1161]: No configuration found.
Oct 28 00:12:22.750500 kernel: Guest personality initialized and is inactive
Oct 28 00:12:22.750517 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 28 00:12:22.750533 kernel: Initialized host personality
Oct 28 00:12:22.750549 kernel: NET: Registered PF_VSOCK protocol family
Oct 28 00:12:22.750565 systemd[1]: Populated /etc with preset unit settings.
Oct 28 00:12:22.750585 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 28 00:12:22.750602 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 28 00:12:22.750619 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 28 00:12:22.750639 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 28 00:12:22.750656 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 28 00:12:22.750672 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 28 00:12:22.750690 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 28 00:12:22.750710 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 28 00:12:22.750727 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 28 00:12:22.750744 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 28 00:12:22.750761 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 28 00:12:22.750777 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 00:12:22.750794 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 00:12:22.750812 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 28 00:12:22.750831 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 28 00:12:22.750849 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 28 00:12:22.750866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 28 00:12:22.750883 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 28 00:12:22.750900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 00:12:22.750916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 28 00:12:22.750964 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 28 00:12:22.750987 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 28 00:12:22.751015 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 28 00:12:22.751032 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 28 00:12:22.751051 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 00:12:22.751068 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 28 00:12:22.751085 systemd[1]: Reached target slices.target - Slice Units.
Oct 28 00:12:22.751105 systemd[1]: Reached target swap.target - Swaps.
Oct 28 00:12:22.751122 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 28 00:12:22.751139 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 28 00:12:22.751156 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 28 00:12:22.751173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 00:12:22.751190 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 28 00:12:22.751206 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 00:12:22.751227 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 28 00:12:22.751244 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 28 00:12:22.751261 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 28 00:12:22.751278 systemd[1]: Mounting media.mount - External Media Directory...
Oct 28 00:12:22.751295 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 28 00:12:22.751312 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 28 00:12:22.751329 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 28 00:12:22.751349 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 28 00:12:22.751366 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 28 00:12:22.751383 systemd[1]: Reached target machines.target - Containers.
Oct 28 00:12:22.751399 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 28 00:12:22.751418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 28 00:12:22.751436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 28 00:12:22.751452 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 28 00:12:22.751472 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 28 00:12:22.751490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 28 00:12:22.751507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 28 00:12:22.751524 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 28 00:12:22.751541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 28 00:12:22.751558 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 28 00:12:22.751578 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 28 00:12:22.751595 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 28 00:12:22.751611 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 28 00:12:22.751628 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 28 00:12:22.751645 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 28 00:12:22.751663 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 28 00:12:22.751679 kernel: fuse: init (API version 7.41)
Oct 28 00:12:22.751698 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 28 00:12:22.751715 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 28 00:12:22.751732 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 28 00:12:22.751748 kernel: ACPI: bus type drm_connector registered
Oct 28 00:12:22.751764 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 28 00:12:22.751783 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 28 00:12:22.751803 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 28 00:12:22.751819 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 28 00:12:22.751835 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 28 00:12:22.751851 systemd[1]: Mounted media.mount - External Media Directory.
Oct 28 00:12:22.751867 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 28 00:12:22.751909 systemd-journald[1246]: Collecting audit messages is disabled.
Oct 28 00:12:22.751957 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 28 00:12:22.751974 systemd-journald[1246]: Journal started
Oct 28 00:12:22.752013 systemd-journald[1246]: Runtime Journal (/run/log/journal/6b15b636df154d3eaa2945482685311e) is 6M, max 48.3M, 42.2M free.
Oct 28 00:12:22.381135 systemd[1]: Queued start job for default target multi-user.target.
Oct 28 00:12:22.403951 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 28 00:12:22.404602 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 28 00:12:22.756161 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 28 00:12:22.758663 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 28 00:12:22.760651 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 28 00:12:22.763077 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 00:12:22.765417 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 28 00:12:22.765649 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 28 00:12:22.767825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 28 00:12:22.768073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 28 00:12:22.770194 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 28 00:12:22.770408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 28 00:12:22.772414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 28 00:12:22.772633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 28 00:12:22.774866 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 28 00:12:22.775115 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 28 00:12:22.777137 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 28 00:12:22.777347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 28 00:12:22.779423 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 28 00:12:22.781671 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 00:12:22.784814 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 28 00:12:22.787279 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 28 00:12:22.803876 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 28 00:12:22.806187 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 28 00:12:22.809717 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 28 00:12:22.812723 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 28 00:12:22.814979 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 28 00:12:22.815021 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 28 00:12:22.817817 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 28 00:12:22.820411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 28 00:12:22.823281 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 28 00:12:22.828016 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 28 00:12:22.829822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 28 00:12:22.832258 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 28 00:12:22.834247 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 28 00:12:22.836142 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 28 00:12:22.841096 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 28 00:12:22.844795 systemd-journald[1246]: Time spent on flushing to /var/log/journal/6b15b636df154d3eaa2945482685311e is 17.845ms for 971 entries.
Oct 28 00:12:22.844795 systemd-journald[1246]: System Journal (/var/log/journal/6b15b636df154d3eaa2945482685311e) is 8M, max 163.5M, 155.5M free.
Oct 28 00:12:22.906742 systemd-journald[1246]: Received client request to flush runtime journal.
Oct 28 00:12:22.906804 kernel: loop1: detected capacity change from 0 to 128048
Oct 28 00:12:22.844675 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 28 00:12:22.852881 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 00:12:22.856419 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 28 00:12:22.859184 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 28 00:12:22.861867 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 28 00:12:22.868328 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 28 00:12:22.872776 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 28 00:12:22.909866 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 28 00:12:22.913128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 28 00:12:22.928430 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 28 00:12:22.930689 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 28 00:12:22.935926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 28 00:12:22.938983 kernel: loop2: detected capacity change from 0 to 219144
Oct 28 00:12:22.940429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 28 00:12:22.952616 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 28 00:12:22.962102 kernel: loop3: detected capacity change from 0 to 110976
Oct 28 00:12:22.970783 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Oct 28 00:12:22.970800 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Oct 28 00:12:22.979185 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 00:12:22.988955 kernel: loop4: detected capacity change from 0 to 128048
Oct 28 00:12:22.999074 kernel: loop5: detected capacity change from 0 to 219144
Oct 28 00:12:23.004556 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 28 00:12:23.010946 kernel: loop6: detected capacity change from 0 to 110976
Oct 28 00:12:23.019631 (sd-merge)[1304]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 28 00:12:23.023861 (sd-merge)[1304]: Merged extensions into '/usr'.
Oct 28 00:12:23.029242 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 28 00:12:23.029263 systemd[1]: Reloading...
Oct 28 00:12:23.093996 zram_generator::config[1338]: No configuration found.
Oct 28 00:12:23.096314 systemd-resolved[1297]: Positive Trust Anchors:
Oct 28 00:12:23.096708 systemd-resolved[1297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 28 00:12:23.096716 systemd-resolved[1297]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 28 00:12:23.096747 systemd-resolved[1297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 28 00:12:23.100744 systemd-resolved[1297]: Defaulting to hostname 'linux'.
Oct 28 00:12:23.314708 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 28 00:12:23.315365 systemd[1]: Reloading finished in 285 ms.
Oct 28 00:12:23.348481 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 28 00:12:23.350774 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 28 00:12:23.355278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 28 00:12:23.371275 systemd[1]: Starting ensure-sysext.service...
Oct 28 00:12:23.373701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 28 00:12:23.395244 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 28 00:12:23.395300 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 28 00:12:23.395732 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 28 00:12:23.396162 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 28 00:12:23.397583 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 28 00:12:23.398034 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Oct 28 00:12:23.398141 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Oct 28 00:12:23.408210 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 00:12:23.408229 systemd-tmpfiles[1375]: Skipping /boot Oct 28 00:12:23.408611 systemd[1]: Reload requested from client PID 1374 ('systemctl') (unit ensure-sysext.service)... Oct 28 00:12:23.408625 systemd[1]: Reloading... Oct 28 00:12:23.419833 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 00:12:23.419850 systemd-tmpfiles[1375]: Skipping /boot Oct 28 00:12:23.468998 zram_generator::config[1411]: No configuration found. Oct 28 00:12:23.642089 systemd[1]: Reloading finished in 232 ms. Oct 28 00:12:23.659332 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 28 00:12:23.683466 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 00:12:23.694240 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 00:12:23.696841 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 28 00:12:23.706617 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 28 00:12:23.710150 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 28 00:12:23.714839 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Oct 28 00:12:23.721246 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 28 00:12:23.729036 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 00:12:23.729288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 00:12:23.731018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 00:12:23.735284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 00:12:23.743522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 00:12:23.745607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 00:12:23.745728 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 00:12:23.745822 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 00:12:23.748539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 00:12:23.749242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 00:12:23.756715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 00:12:23.757218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 00:12:23.760715 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 00:12:23.761181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 00:12:23.762856 systemd-udevd[1449]: Using default interface naming scheme 'v257'. 
Oct 28 00:12:23.768834 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 28 00:12:23.776378 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 28 00:12:23.785573 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 00:12:23.785804 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 00:12:23.788193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 00:12:23.793318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 00:12:23.797189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 00:12:23.799086 augenrules[1479]: No rules Oct 28 00:12:23.801526 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 00:12:23.803456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 00:12:23.803570 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 00:12:23.803703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 00:12:23.804912 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 00:12:23.808839 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 00:12:23.812324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 00:12:23.812531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 00:12:23.815165 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 28 00:12:23.815373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 00:12:23.817725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 00:12:23.818062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 00:12:23.820792 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 00:12:23.821018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 00:12:23.823674 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 00:12:23.832042 systemd[1]: Finished ensure-sysext.service. Oct 28 00:12:23.840055 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 00:12:23.842064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 00:12:23.842125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 00:12:23.845237 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 28 00:12:23.847463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 28 00:12:23.850632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 28 00:12:23.885710 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 28 00:12:23.931004 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 28 00:12:23.933260 systemd[1]: Reached target time-set.target - System Time Set. 
Oct 28 00:12:23.949190 systemd-networkd[1499]: lo: Link UP Oct 28 00:12:23.949200 systemd-networkd[1499]: lo: Gained carrier Oct 28 00:12:23.951772 systemd-networkd[1499]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 00:12:23.951783 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 00:12:23.953025 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 00:12:23.953157 systemd-networkd[1499]: eth0: Link UP Oct 28 00:12:23.953366 systemd-networkd[1499]: eth0: Gained carrier Oct 28 00:12:23.953381 systemd-networkd[1499]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 00:12:23.955208 systemd[1]: Reached target network.target - Network. Oct 28 00:12:23.958034 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 28 00:12:23.961765 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 28 00:12:23.969022 systemd-networkd[1499]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 00:12:23.970514 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection. Oct 28 00:12:24.612882 systemd-resolved[1297]: Clock change detected. Flushing caches. Oct 28 00:12:24.613531 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 28 00:12:24.613583 systemd-timesyncd[1504]: Initial clock synchronization to Tue 2025-10-28 00:12:24.611850 UTC. Oct 28 00:12:24.617438 kernel: mousedev: PS/2 mouse device common for all mice Oct 28 00:12:24.628258 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Oct 28 00:12:24.641441 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 28 00:12:24.652621 kernel: ACPI: button: Power Button [PWRF] Oct 28 00:12:24.654467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 00:12:24.657780 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 28 00:12:24.681760 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 28 00:12:24.795924 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 28 00:12:24.796550 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 28 00:12:24.833470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 00:12:24.854245 kernel: kvm_amd: TSC scaling supported Oct 28 00:12:24.854316 kernel: kvm_amd: Nested Virtualization enabled Oct 28 00:12:24.854331 kernel: kvm_amd: Nested Paging enabled Oct 28 00:12:24.855057 kernel: kvm_amd: LBR virtualization supported Oct 28 00:12:24.857506 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 28 00:12:24.857561 kernel: kvm_amd: Virtual GIF supported Oct 28 00:12:24.896447 kernel: EDAC MC: Ver: 3.0.0 Oct 28 00:12:24.943603 ldconfig[1446]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 28 00:12:24.981985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 00:12:25.027379 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 28 00:12:25.031287 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 28 00:12:25.064203 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 28 00:12:25.066597 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 00:12:25.068654 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 28 00:12:25.070915 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 28 00:12:25.073203 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 28 00:12:25.075397 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 28 00:12:25.077716 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 28 00:12:25.080095 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 28 00:12:25.082406 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 28 00:12:25.082481 systemd[1]: Reached target paths.target - Path Units. Oct 28 00:12:25.084111 systemd[1]: Reached target timers.target - Timer Units. Oct 28 00:12:25.087078 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 28 00:12:25.091210 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 28 00:12:25.095369 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 28 00:12:25.097918 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 28 00:12:25.100215 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 28 00:12:25.110888 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 28 00:12:25.113231 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 00:12:25.116147 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 00:12:25.119096 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 00:12:25.120804 systemd[1]: Reached target basic.target - Basic System. 
Oct 28 00:12:25.122534 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 00:12:25.122569 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 00:12:25.124055 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 00:12:25.127327 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 00:12:25.130293 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 00:12:25.133882 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 28 00:12:25.137066 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 00:12:25.138907 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 00:12:25.140669 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 28 00:12:25.144677 jq[1565]: false Oct 28 00:12:25.144547 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 00:12:25.148527 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 00:12:25.152581 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 28 00:12:25.156291 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing passwd entry cache Oct 28 00:12:25.156659 oslogin_cache_refresh[1567]: Refreshing passwd entry cache Oct 28 00:12:25.156764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 28 00:12:25.163367 extend-filesystems[1566]: Found /dev/vda6 Oct 28 00:12:25.166575 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 28 00:12:25.168500 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 00:12:25.169067 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting users, quitting Oct 28 00:12:25.169067 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 00:12:25.169067 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing group entry cache Oct 28 00:12:25.169141 extend-filesystems[1566]: Found /dev/vda9 Oct 28 00:12:25.168656 oslogin_cache_refresh[1567]: Failure getting users, quitting Oct 28 00:12:25.169198 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 00:12:25.168682 oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 00:12:25.170002 systemd[1]: Starting update-engine.service - Update Engine... Oct 28 00:12:25.168758 oslogin_cache_refresh[1567]: Refreshing group entry cache Oct 28 00:12:25.173572 extend-filesystems[1566]: Checking size of /dev/vda9 Oct 28 00:12:25.173927 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 00:12:25.178403 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 00:12:25.181319 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 28 00:12:25.185706 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting groups, quitting Oct 28 00:12:25.185706 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Oct 28 00:12:25.185696 oslogin_cache_refresh[1567]: Failure getting groups, quitting Oct 28 00:12:25.185714 oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 00:12:25.186744 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 00:12:25.187575 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 28 00:12:25.187889 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 28 00:12:25.189566 jq[1584]: true Oct 28 00:12:25.190843 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 00:12:25.191212 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 28 00:12:25.197044 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 00:12:25.197371 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 28 00:12:25.211172 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 28 00:12:25.215703 jq[1596]: true Oct 28 00:12:25.220494 update_engine[1583]: I20251028 00:12:25.217983 1583 main.cc:92] Flatcar Update Engine starting Oct 28 00:12:25.233506 tar[1593]: linux-amd64/LICENSE Oct 28 00:12:25.233979 tar[1593]: linux-amd64/helm Oct 28 00:12:25.256452 extend-filesystems[1566]: Resized partition /dev/vda9 Oct 28 00:12:25.397560 systemd-logind[1578]: Watching system buttons on /dev/input/event2 (Power Button) Oct 28 00:12:25.399367 dbus-daemon[1563]: [system] SELinux support is enabled Oct 28 00:12:25.399824 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 28 00:12:25.406148 extend-filesystems[1630]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 00:12:25.404262 systemd-logind[1578]: New seat seat0. 
Oct 28 00:12:25.442103 update_engine[1583]: I20251028 00:12:25.424593 1583 update_check_scheduler.cc:74] Next update check in 9m40s Oct 28 00:12:25.404382 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 00:12:25.436280 systemd[1]: Started update-engine.service - Update Engine. Oct 28 00:12:25.444261 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 00:12:25.446964 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 28 00:12:25.447113 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 28 00:12:25.488899 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 00:12:25.489036 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 00:12:25.493153 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 00:12:25.578454 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 28 00:12:26.349688 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 28 00:12:26.436242 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 00:12:26.458654 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 00:12:26.481252 tar[1593]: linux-amd64/README.md Oct 28 00:12:26.485126 extend-filesystems[1630]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 00:12:26.485126 extend-filesystems[1630]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 28 00:12:26.485126 extend-filesystems[1630]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. 
Oct 28 00:12:26.540567 extend-filesystems[1566]: Resized filesystem in /dev/vda9 Oct 28 00:12:26.542379 bash[1625]: Updated "/home/core/.ssh/authorized_keys" Oct 28 00:12:26.489813 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 28 00:12:26.490093 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 00:12:26.495716 systemd-networkd[1499]: eth0: Gained IPv6LL Oct 28 00:12:26.851208 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 28 00:12:26.855447 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 00:12:26.859372 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 28 00:12:26.862501 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 28 00:12:26.868711 systemd[1]: Reached target network-online.target - Network is Online. Oct 28 00:12:26.872770 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 28 00:12:26.875843 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 00:12:26.927710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 00:12:26.932153 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 28 00:12:26.934555 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 28 00:12:26.940139 systemd[1]: issuegen.service: Deactivated successfully. Oct 28 00:12:26.940494 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 00:12:26.951697 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 00:12:26.982299 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 28 00:12:26.982621 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 28 00:12:26.985490 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 28 00:12:26.990993 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 00:12:26.993343 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 28 00:12:26.996251 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 28 00:12:26.998570 systemd[1]: Reached target getty.target - Login Prompts. Oct 28 00:12:27.051480 containerd[1597]: time="2025-10-28T00:12:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 28 00:12:27.051480 containerd[1597]: time="2025-10-28T00:12:27.051097125Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 28 00:12:27.053730 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 28 00:12:27.063421 containerd[1597]: time="2025-10-28T00:12:27.063332927Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.322µs" Oct 28 00:12:27.063421 containerd[1597]: time="2025-10-28T00:12:27.063372321Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 28 00:12:27.063421 containerd[1597]: time="2025-10-28T00:12:27.063401235Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 28 00:12:27.063656 containerd[1597]: time="2025-10-28T00:12:27.063621909Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 28 00:12:27.063656 containerd[1597]: time="2025-10-28T00:12:27.063643279Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 28 00:12:27.063716 containerd[1597]: time="2025-10-28T00:12:27.063669639Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 00:12:27.063742 containerd[1597]: time="2025-10-28T00:12:27.063733007Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 00:12:27.063767 containerd[1597]: time="2025-10-28T00:12:27.063743297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 00:12:27.064025 containerd[1597]: time="2025-10-28T00:12:27.063987905Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 00:12:27.064025 containerd[1597]: time="2025-10-28T00:12:27.064005739Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 00:12:27.064025 containerd[1597]: time="2025-10-28T00:12:27.064015878Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 00:12:27.064025 containerd[1597]: time="2025-10-28T00:12:27.064023773Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 28 00:12:27.064173 containerd[1597]: time="2025-10-28T00:12:27.064117789Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 28 00:12:27.064373 containerd[1597]: time="2025-10-28T00:12:27.064339384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 00:12:27.064427 containerd[1597]: time="2025-10-28T00:12:27.064373859Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: 
no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 00:12:27.064427 containerd[1597]: time="2025-10-28T00:12:27.064386002Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 28 00:12:27.064477 containerd[1597]: time="2025-10-28T00:12:27.064454751Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 28 00:12:27.064799 containerd[1597]: time="2025-10-28T00:12:27.064777005Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 28 00:12:27.064868 containerd[1597]: time="2025-10-28T00:12:27.064851214Z" level=info msg="metadata content store policy set" policy=shared Oct 28 00:12:27.070813 containerd[1597]: time="2025-10-28T00:12:27.070759372Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 28 00:12:27.070813 containerd[1597]: time="2025-10-28T00:12:27.070816178Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070839362Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070850513Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070862285Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070872013Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070884737Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070907710Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070919031Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070933458Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070942956Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 28 00:12:27.070978 containerd[1597]: time="2025-10-28T00:12:27.070955730Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071083600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071102876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071117022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071127532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071137621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071147449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071158410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071168289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071178858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 28 00:12:27.071187 containerd[1597]: time="2025-10-28T00:12:27.071189578Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 28 00:12:27.071438 containerd[1597]: time="2025-10-28T00:12:27.071208734Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 28 00:12:27.071438 containerd[1597]: time="2025-10-28T00:12:27.071277984Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 28 00:12:27.071438 containerd[1597]: time="2025-10-28T00:12:27.071290187Z" level=info msg="Start snapshots syncer" Oct 28 00:12:27.071438 containerd[1597]: time="2025-10-28T00:12:27.071318891Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 28 00:12:27.072808 containerd[1597]: time="2025-10-28T00:12:27.072753411Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 28 00:12:27.072938 containerd[1597]: time="2025-10-28T00:12:27.072810808Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 28 00:12:27.091992 containerd[1597]: time="2025-10-28T00:12:27.091916321Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 28 00:12:27.093175 containerd[1597]: time="2025-10-28T00:12:27.092486851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 28 00:12:27.093175 containerd[1597]: time="2025-10-28T00:12:27.092556572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 28 00:12:27.093362 containerd[1597]: time="2025-10-28T00:12:27.093138783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 28 00:12:27.093487 containerd[1597]: time="2025-10-28T00:12:27.093467630Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 28 00:12:27.093572 containerd[1597]: time="2025-10-28T00:12:27.093531169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 28 00:12:27.093572 containerd[1597]: time="2025-10-28T00:12:27.093550465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 28 00:12:27.093572 containerd[1597]: time="2025-10-28T00:12:27.093564131Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093598595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093611259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093624985Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093658468Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093672865Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093681892Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093690999Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093699154Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093709493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093719773Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093740381Z" level=info msg="runtime interface created" Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093745771Z" level=info msg="created NRI interface" Oct 28 00:12:27.093756 containerd[1597]: time="2025-10-28T00:12:27.093754528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 28 00:12:27.094153 containerd[1597]: time="2025-10-28T00:12:27.093776218Z" level=info msg="Connect containerd service" Oct 28 00:12:27.094153 containerd[1597]: time="2025-10-28T00:12:27.093815232Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 28 00:12:27.094746 
containerd[1597]: time="2025-10-28T00:12:27.094704439Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 00:12:27.424338 containerd[1597]: time="2025-10-28T00:12:27.424265836Z" level=info msg="Start subscribing containerd event" Oct 28 00:12:27.424505 containerd[1597]: time="2025-10-28T00:12:27.424344163Z" level=info msg="Start recovering state" Oct 28 00:12:27.424573 containerd[1597]: time="2025-10-28T00:12:27.424517358Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 28 00:12:27.424655 containerd[1597]: time="2025-10-28T00:12:27.424627815Z" level=info msg="Start event monitor" Oct 28 00:12:27.424684 containerd[1597]: time="2025-10-28T00:12:27.424657481Z" level=info msg="Start cni network conf syncer for default" Oct 28 00:12:27.424710 containerd[1597]: time="2025-10-28T00:12:27.424696143Z" level=info msg="Start streaming server" Oct 28 00:12:27.424736 containerd[1597]: time="2025-10-28T00:12:27.424712474Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 28 00:12:27.424736 containerd[1597]: time="2025-10-28T00:12:27.424722763Z" level=info msg="runtime interface starting up..." Oct 28 00:12:27.424785 containerd[1597]: time="2025-10-28T00:12:27.424731079Z" level=info msg="starting plugins..." Oct 28 00:12:27.424812 containerd[1597]: time="2025-10-28T00:12:27.424788617Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 28 00:12:27.424995 containerd[1597]: time="2025-10-28T00:12:27.424963144Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 28 00:12:27.425054 containerd[1597]: time="2025-10-28T00:12:27.425036461Z" level=info msg="containerd successfully booted in 0.375287s" Oct 28 00:12:27.425246 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 28 00:12:28.119264 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 28 00:12:28.123351 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:54708.service - OpenSSH per-connection server daemon (10.0.0.1:54708). Oct 28 00:12:28.235854 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 54708 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:12:28.238361 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:12:28.246515 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 28 00:12:28.250191 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 28 00:12:28.259396 systemd-logind[1578]: New session 1 of user core. Oct 28 00:12:28.288475 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 28 00:12:28.358018 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 28 00:12:28.388737 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 28 00:12:28.392061 systemd-logind[1578]: New session c1 of user core. Oct 28 00:12:28.497099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 00:12:28.500384 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 28 00:12:28.512819 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 00:12:28.582598 systemd[1704]: Queued start job for default target default.target. Oct 28 00:12:28.595171 systemd[1704]: Created slice app.slice - User Application Slice. Oct 28 00:12:28.595204 systemd[1704]: Reached target paths.target - Paths. Oct 28 00:12:28.595282 systemd[1704]: Reached target timers.target - Timers. Oct 28 00:12:28.597530 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Oct 28 00:12:28.613012 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 28 00:12:28.613148 systemd[1704]: Reached target sockets.target - Sockets. Oct 28 00:12:28.613187 systemd[1704]: Reached target basic.target - Basic System. Oct 28 00:12:28.613227 systemd[1704]: Reached target default.target - Main User Target. Oct 28 00:12:28.613260 systemd[1704]: Startup finished in 214ms. Oct 28 00:12:28.614531 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 28 00:12:28.631855 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 28 00:12:28.635247 systemd[1]: Startup finished in 2.481s (kernel) + 5.911s (initrd) + 6.286s (userspace) = 14.679s. Oct 28 00:12:28.763377 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:54724.service - OpenSSH per-connection server daemon (10.0.0.1:54724). Oct 28 00:12:28.826308 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 54724 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:12:28.827932 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:12:28.832806 systemd-logind[1578]: New session 2 of user core. Oct 28 00:12:28.839567 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 28 00:12:28.928988 sshd[1729]: Connection closed by 10.0.0.1 port 54724 Oct 28 00:12:28.931615 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Oct 28 00:12:28.944024 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:54724.service: Deactivated successfully. Oct 28 00:12:28.945944 systemd[1]: session-2.scope: Deactivated successfully. Oct 28 00:12:28.946679 systemd-logind[1578]: Session 2 logged out. Waiting for processes to exit. Oct 28 00:12:28.949333 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:54726.service - OpenSSH per-connection server daemon (10.0.0.1:54726). Oct 28 00:12:28.949984 systemd-logind[1578]: Removed session 2. 
Oct 28 00:12:29.017711 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 54726 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:12:29.019444 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:12:29.024670 systemd-logind[1578]: New session 3 of user core. Oct 28 00:12:29.026288 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 28 00:12:29.079288 sshd[1743]: Connection closed by 10.0.0.1 port 54726 Oct 28 00:12:29.079739 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Oct 28 00:12:29.099128 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:54726.service: Deactivated successfully. Oct 28 00:12:29.101138 systemd[1]: session-3.scope: Deactivated successfully. Oct 28 00:12:29.101972 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit. Oct 28 00:12:29.105097 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:54738.service - OpenSSH per-connection server daemon (10.0.0.1:54738). Oct 28 00:12:29.105948 systemd-logind[1578]: Removed session 3. Oct 28 00:12:29.163457 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 54738 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:12:29.164975 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:12:29.169490 systemd-logind[1578]: New session 4 of user core. Oct 28 00:12:29.202622 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 28 00:12:29.210734 kubelet[1715]: E1028 00:12:29.210691 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 00:12:29.214185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 00:12:29.214388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 00:12:29.214772 systemd[1]: kubelet.service: Consumed 2.027s CPU time, 256.8M memory peak. Oct 28 00:12:29.346978 sshd[1753]: Connection closed by 10.0.0.1 port 54738 Oct 28 00:12:29.347678 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Oct 28 00:12:29.357187 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:54738.service: Deactivated successfully. Oct 28 00:12:29.359192 systemd[1]: session-4.scope: Deactivated successfully. Oct 28 00:12:29.359982 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit. Oct 28 00:12:29.363041 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:54742.service - OpenSSH per-connection server daemon (10.0.0.1:54742). Oct 28 00:12:29.363821 systemd-logind[1578]: Removed session 4. Oct 28 00:12:29.426026 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 54742 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:12:29.427374 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:12:29.431713 systemd-logind[1578]: New session 5 of user core. Oct 28 00:12:29.445782 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 28 00:12:29.512334 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 28 00:12:29.512760 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 00:12:29.531042 sudo[1764]: pam_unix(sudo:session): session closed for user root Oct 28 00:12:29.533140 sshd[1763]: Connection closed by 10.0.0.1 port 54742 Oct 28 00:12:29.533564 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Oct 28 00:12:29.553521 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:54742.service: Deactivated successfully. Oct 28 00:12:29.555262 systemd[1]: session-5.scope: Deactivated successfully. Oct 28 00:12:29.555985 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit. Oct 28 00:12:29.558957 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:54750.service - OpenSSH per-connection server daemon (10.0.0.1:54750). Oct 28 00:12:29.559718 systemd-logind[1578]: Removed session 5. Oct 28 00:12:29.613632 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 54750 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:12:29.615142 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:12:29.619787 systemd-logind[1578]: New session 6 of user core. Oct 28 00:12:29.629557 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 28 00:12:29.684443 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 28 00:12:29.684822 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 00:12:29.692277 sudo[1776]: pam_unix(sudo:session): session closed for user root Oct 28 00:12:29.700515 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 28 00:12:29.700826 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 00:12:29.712258 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 00:12:29.759141 augenrules[1798]: No rules Oct 28 00:12:29.761008 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 00:12:29.761288 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 00:12:29.762524 sudo[1775]: pam_unix(sudo:session): session closed for user root Oct 28 00:12:29.764654 sshd[1774]: Connection closed by 10.0.0.1 port 54750 Oct 28 00:12:29.765058 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Oct 28 00:12:29.774363 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:54750.service: Deactivated successfully. Oct 28 00:12:29.776159 systemd[1]: session-6.scope: Deactivated successfully. Oct 28 00:12:29.776944 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit. Oct 28 00:12:29.779746 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:54766.service - OpenSSH per-connection server daemon (10.0.0.1:54766). Oct 28 00:12:29.780377 systemd-logind[1578]: Removed session 6. Oct 28 00:12:29.844751 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 54766 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:12:29.846278 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:12:29.851724 systemd-logind[1578]: New session 7 of user core. 
Oct 28 00:12:29.861698 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 28 00:12:29.917725 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 28 00:12:29.918107 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 00:12:30.734678 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 28 00:12:30.756704 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 28 00:12:31.728190 dockerd[1831]: time="2025-10-28T00:12:31.728085838Z" level=info msg="Starting up" Oct 28 00:12:31.729069 dockerd[1831]: time="2025-10-28T00:12:31.729024418Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 28 00:12:31.747166 dockerd[1831]: time="2025-10-28T00:12:31.747114877Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 28 00:12:32.153236 dockerd[1831]: time="2025-10-28T00:12:32.153094819Z" level=info msg="Loading containers: start." Oct 28 00:12:32.164435 kernel: Initializing XFRM netlink socket Oct 28 00:12:32.838478 systemd-networkd[1499]: docker0: Link UP Oct 28 00:12:33.014029 dockerd[1831]: time="2025-10-28T00:12:33.013936998Z" level=info msg="Loading containers: done." Oct 28 00:12:33.030228 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3655856457-merged.mount: Deactivated successfully. 
Oct 28 00:12:33.093803 dockerd[1831]: time="2025-10-28T00:12:33.093620835Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 28 00:12:33.093803 dockerd[1831]: time="2025-10-28T00:12:33.093759926Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 28 00:12:33.094022 dockerd[1831]: time="2025-10-28T00:12:33.093880653Z" level=info msg="Initializing buildkit" Oct 28 00:12:33.134244 dockerd[1831]: time="2025-10-28T00:12:33.134185655Z" level=info msg="Completed buildkit initialization" Oct 28 00:12:33.138820 dockerd[1831]: time="2025-10-28T00:12:33.138777604Z" level=info msg="Daemon has completed initialization" Oct 28 00:12:33.138952 dockerd[1831]: time="2025-10-28T00:12:33.138865920Z" level=info msg="API listen on /run/docker.sock" Oct 28 00:12:33.139185 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 28 00:12:33.799666 containerd[1597]: time="2025-10-28T00:12:33.799600597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 28 00:12:34.455159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403016287.mount: Deactivated successfully. 
Oct 28 00:12:35.327984 containerd[1597]: time="2025-10-28T00:12:35.327919044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:35.328867 containerd[1597]: time="2025-10-28T00:12:35.328800216Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 28 00:12:35.329925 containerd[1597]: time="2025-10-28T00:12:35.329885832Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:35.332667 containerd[1597]: time="2025-10-28T00:12:35.332629767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:35.333648 containerd[1597]: time="2025-10-28T00:12:35.333617819Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.533973029s" Oct 28 00:12:35.333685 containerd[1597]: time="2025-10-28T00:12:35.333657233Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 28 00:12:35.334317 containerd[1597]: time="2025-10-28T00:12:35.334133256Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 28 00:12:37.041909 containerd[1597]: time="2025-10-28T00:12:37.041852719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:37.042617 containerd[1597]: time="2025-10-28T00:12:37.042583089Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 28 00:12:37.043744 containerd[1597]: time="2025-10-28T00:12:37.043703550Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:37.046406 containerd[1597]: time="2025-10-28T00:12:37.046355492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:37.047467 containerd[1597]: time="2025-10-28T00:12:37.047432451Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.71326924s" Oct 28 00:12:37.047467 containerd[1597]: time="2025-10-28T00:12:37.047464581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 28 00:12:37.048126 containerd[1597]: time="2025-10-28T00:12:37.047869621Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 28 00:12:37.947518 containerd[1597]: time="2025-10-28T00:12:37.947441318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:37.948453 containerd[1597]: time="2025-10-28T00:12:37.948379076Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 28 00:12:37.950669 containerd[1597]: time="2025-10-28T00:12:37.950620268Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:37.953724 containerd[1597]: time="2025-10-28T00:12:37.953672922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:37.954644 containerd[1597]: time="2025-10-28T00:12:37.954566337Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 906.595807ms" Oct 28 00:12:37.954644 containerd[1597]: time="2025-10-28T00:12:37.954631810Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 28 00:12:37.955405 containerd[1597]: time="2025-10-28T00:12:37.955188463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 28 00:12:39.464877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 28 00:12:39.467174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 00:12:39.709281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924393083.mount: Deactivated successfully. Oct 28 00:12:39.717717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 28 00:12:39.749962 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 00:12:40.136766 kubelet[2128]: E1028 00:12:40.136596 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 00:12:40.143748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 00:12:40.143936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 00:12:40.144399 systemd[1]: kubelet.service: Consumed 388ms CPU time, 109.6M memory peak. Oct 28 00:12:40.456832 containerd[1597]: time="2025-10-28T00:12:40.456710386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:40.457388 containerd[1597]: time="2025-10-28T00:12:40.457348172Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 28 00:12:40.458506 containerd[1597]: time="2025-10-28T00:12:40.458471008Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:40.460340 containerd[1597]: time="2025-10-28T00:12:40.460315326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:40.460720 containerd[1597]: time="2025-10-28T00:12:40.460698444Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id 
\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.505479784s" Oct 28 00:12:40.460765 containerd[1597]: time="2025-10-28T00:12:40.460723391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 28 00:12:40.461116 containerd[1597]: time="2025-10-28T00:12:40.461078256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 28 00:12:41.112111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697100329.mount: Deactivated successfully. Oct 28 00:12:42.627752 containerd[1597]: time="2025-10-28T00:12:42.627690464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:42.628344 containerd[1597]: time="2025-10-28T00:12:42.628303403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 28 00:12:42.629753 containerd[1597]: time="2025-10-28T00:12:42.629705973Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:42.632447 containerd[1597]: time="2025-10-28T00:12:42.632381489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:42.633931 containerd[1597]: time="2025-10-28T00:12:42.633893304Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.172787726s" Oct 28 00:12:42.633993 containerd[1597]: time="2025-10-28T00:12:42.633933449Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 28 00:12:42.634698 containerd[1597]: time="2025-10-28T00:12:42.634463563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 28 00:12:43.469785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924061865.mount: Deactivated successfully. Oct 28 00:12:43.475785 containerd[1597]: time="2025-10-28T00:12:43.475711783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:43.477086 containerd[1597]: time="2025-10-28T00:12:43.477027170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 28 00:12:43.478337 containerd[1597]: time="2025-10-28T00:12:43.478305747Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:43.480448 containerd[1597]: time="2025-10-28T00:12:43.480392590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:43.480974 containerd[1597]: time="2025-10-28T00:12:43.480943863Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 846.44815ms" Oct 28 00:12:43.480974 containerd[1597]: time="2025-10-28T00:12:43.480972407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 28 00:12:43.481715 containerd[1597]: time="2025-10-28T00:12:43.481635200Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 28 00:12:47.206691 containerd[1597]: time="2025-10-28T00:12:47.206611279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:47.208030 containerd[1597]: time="2025-10-28T00:12:47.207991186Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 28 00:12:47.213929 containerd[1597]: time="2025-10-28T00:12:47.213865711Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:47.217132 containerd[1597]: time="2025-10-28T00:12:47.217071382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:12:47.218310 containerd[1597]: time="2025-10-28T00:12:47.218257165Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.736551574s" Oct 28 00:12:47.218310 containerd[1597]: time="2025-10-28T00:12:47.218293303Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns 
image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 28 00:12:50.394846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 28 00:12:50.397037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 00:12:50.635036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 00:12:50.653932 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 00:12:50.721369 kubelet[2270]: E1028 00:12:50.721258 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 00:12:50.725829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 00:12:50.726060 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 00:12:50.726449 systemd[1]: kubelet.service: Consumed 250ms CPU time, 108.7M memory peak. Oct 28 00:12:50.774311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 00:12:50.774534 systemd[1]: kubelet.service: Consumed 250ms CPU time, 108.7M memory peak. Oct 28 00:12:50.777354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 00:12:50.858396 systemd[1]: Reload requested from client PID 2286 ('systemctl') (unit session-7.scope)... Oct 28 00:12:50.858429 systemd[1]: Reloading... Oct 28 00:12:50.959438 zram_generator::config[2329]: No configuration found. Oct 28 00:12:51.675116 systemd[1]: Reloading finished in 816 ms. 
Oct 28 00:12:51.759825 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 28 00:12:51.760005 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 28 00:12:51.760462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 00:12:51.760529 systemd[1]: kubelet.service: Consumed 241ms CPU time, 98.2M memory peak. Oct 28 00:12:51.762991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 00:12:51.967292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 00:12:51.980722 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 00:12:52.115551 kubelet[2378]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 00:12:52.115551 kubelet[2378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 00:12:52.116090 kubelet[2378]: I1028 00:12:52.115594 2378 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 00:12:52.937394 kubelet[2378]: I1028 00:12:52.937319 2378 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 28 00:12:52.937394 kubelet[2378]: I1028 00:12:52.937370 2378 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 00:12:52.937633 kubelet[2378]: I1028 00:12:52.937446 2378 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 28 00:12:52.937633 kubelet[2378]: I1028 00:12:52.937459 2378 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 28 00:12:52.937938 kubelet[2378]: I1028 00:12:52.937908 2378 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 00:12:52.948515 kubelet[2378]: E1028 00:12:52.948375 2378 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 28 00:12:52.949439 kubelet[2378]: I1028 00:12:52.949380 2378 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 00:12:52.953178 kubelet[2378]: I1028 00:12:52.953151 2378 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 00:12:52.958292 kubelet[2378]: I1028 00:12:52.958262 2378 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 28 00:12:52.959290 kubelet[2378]: I1028 00:12:52.959234 2378 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 00:12:52.959510 kubelet[2378]: I1028 00:12:52.959277 2378 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 00:12:52.959618 kubelet[2378]: I1028 00:12:52.959514 2378 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 00:12:52.959618 
kubelet[2378]: I1028 00:12:52.959525 2378 container_manager_linux.go:306] "Creating device plugin manager" Oct 28 00:12:52.959695 kubelet[2378]: I1028 00:12:52.959673 2378 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 28 00:12:52.963852 kubelet[2378]: I1028 00:12:52.963801 2378 state_mem.go:36] "Initialized new in-memory state store" Oct 28 00:12:52.964633 kubelet[2378]: I1028 00:12:52.964609 2378 kubelet.go:475] "Attempting to sync node with API server" Oct 28 00:12:52.964671 kubelet[2378]: I1028 00:12:52.964635 2378 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 00:12:52.964702 kubelet[2378]: I1028 00:12:52.964677 2378 kubelet.go:387] "Adding apiserver pod source" Oct 28 00:12:52.964738 kubelet[2378]: I1028 00:12:52.964708 2378 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 00:12:52.965404 kubelet[2378]: E1028 00:12:52.965364 2378 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 28 00:12:52.965809 kubelet[2378]: E1028 00:12:52.965782 2378 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 00:12:52.968607 kubelet[2378]: I1028 00:12:52.968588 2378 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 00:12:52.969331 kubelet[2378]: I1028 00:12:52.969303 2378 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 00:12:52.969368 kubelet[2378]: I1028 00:12:52.969338 2378 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 28 00:12:52.969455 kubelet[2378]: W1028 00:12:52.969438 2378 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 28 00:12:52.973616 kubelet[2378]: I1028 00:12:52.973592 2378 server.go:1262] "Started kubelet" Oct 28 00:12:52.977738 kubelet[2378]: I1028 00:12:52.977687 2378 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 00:12:52.977805 kubelet[2378]: I1028 00:12:52.977788 2378 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 28 00:12:52.978183 kubelet[2378]: I1028 00:12:52.978159 2378 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 00:12:52.978443 kubelet[2378]: I1028 00:12:52.978400 2378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 00:12:52.980845 kubelet[2378]: I1028 00:12:52.980660 2378 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 00:12:52.981922 kubelet[2378]: E1028 00:12:52.980517 2378 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18727f4d01bb6756 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 00:12:52.973553494 +0000 UTC 
m=+0.986819537,LastTimestamp:2025-10-28 00:12:52.973553494 +0000 UTC m=+0.986819537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 28 00:12:52.981922 kubelet[2378]: I1028 00:12:52.981703 2378 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 00:12:52.983504 kubelet[2378]: I1028 00:12:52.983487 2378 server.go:310] "Adding debug handlers to kubelet server" Oct 28 00:12:52.986117 kubelet[2378]: E1028 00:12:52.985047 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 00:12:52.986117 kubelet[2378]: I1028 00:12:52.985104 2378 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 28 00:12:52.986117 kubelet[2378]: I1028 00:12:52.985330 2378 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 28 00:12:52.986117 kubelet[2378]: I1028 00:12:52.985388 2378 reconciler.go:29] "Reconciler: start to sync state" Oct 28 00:12:52.986117 kubelet[2378]: E1028 00:12:52.986047 2378 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 00:12:52.986311 kubelet[2378]: I1028 00:12:52.986190 2378 factory.go:223] Registration of the systemd container factory successfully Oct 28 00:12:52.986311 kubelet[2378]: I1028 00:12:52.986273 2378 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 00:12:52.986631 kubelet[2378]: E1028 00:12:52.986605 2378 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms" Oct 28 00:12:52.987218 kubelet[2378]: E1028 00:12:52.987196 2378 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 00:12:52.988269 kubelet[2378]: I1028 00:12:52.988240 2378 factory.go:223] Registration of the containerd container factory successfully Oct 28 00:12:53.001933 kubelet[2378]: I1028 00:12:53.001873 2378 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 28 00:12:53.004932 kubelet[2378]: I1028 00:12:53.004872 2378 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 28 00:12:53.004932 kubelet[2378]: I1028 00:12:53.004920 2378 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 28 00:12:53.005207 kubelet[2378]: I1028 00:12:53.004957 2378 kubelet.go:2427] "Starting kubelet main sync loop" Oct 28 00:12:53.005207 kubelet[2378]: E1028 00:12:53.005003 2378 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 00:12:53.005875 kubelet[2378]: E1028 00:12:53.005809 2378 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 00:12:53.006037 kubelet[2378]: I1028 00:12:53.005997 2378 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 00:12:53.006037 kubelet[2378]: I1028 00:12:53.006016 2378 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 
00:12:53.006128 kubelet[2378]: I1028 00:12:53.006041 2378 state_mem.go:36] "Initialized new in-memory state store" Oct 28 00:12:53.008975 kubelet[2378]: I1028 00:12:53.008950 2378 policy_none.go:49] "None policy: Start" Oct 28 00:12:53.009053 kubelet[2378]: I1028 00:12:53.008983 2378 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 28 00:12:53.009053 kubelet[2378]: I1028 00:12:53.009023 2378 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 28 00:12:53.011510 kubelet[2378]: I1028 00:12:53.011478 2378 policy_none.go:47] "Start" Oct 28 00:12:53.016562 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 28 00:12:53.029955 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 28 00:12:53.042601 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 28 00:12:53.044116 kubelet[2378]: E1028 00:12:53.044078 2378 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 00:12:53.044332 kubelet[2378]: I1028 00:12:53.044314 2378 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 00:12:53.044380 kubelet[2378]: I1028 00:12:53.044334 2378 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 00:12:53.044625 kubelet[2378]: I1028 00:12:53.044609 2378 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 00:12:53.045607 kubelet[2378]: E1028 00:12:53.045574 2378 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 28 00:12:53.045669 kubelet[2378]: E1028 00:12:53.045655 2378 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 28 00:12:53.118047 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 28 00:12:53.136861 kubelet[2378]: E1028 00:12:53.136751 2378 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 00:12:53.140334 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 28 00:12:53.142574 kubelet[2378]: E1028 00:12:53.142550 2378 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 00:12:53.144426 systemd[1]: Created slice kubepods-burstable-poda6fdb466ced928cdbe123fecf1367638.slice - libcontainer container kubepods-burstable-poda6fdb466ced928cdbe123fecf1367638.slice. 
Oct 28 00:12:53.145716 kubelet[2378]: I1028 00:12:53.145695 2378 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 00:12:53.146107 kubelet[2378]: E1028 00:12:53.146073 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Oct 28 00:12:53.146571 kubelet[2378]: E1028 00:12:53.146541 2378 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 00:12:53.188367 kubelet[2378]: E1028 00:12:53.188240 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms" Oct 28 00:12:53.286703 kubelet[2378]: I1028 00:12:53.286629 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 00:12:53.286703 kubelet[2378]: I1028 00:12:53.286677 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 00:12:53.286888 kubelet[2378]: I1028 00:12:53.286728 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a6fdb466ced928cdbe123fecf1367638-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a6fdb466ced928cdbe123fecf1367638\") " pod="kube-system/kube-apiserver-localhost" Oct 28 00:12:53.286888 kubelet[2378]: I1028 00:12:53.286741 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6fdb466ced928cdbe123fecf1367638-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a6fdb466ced928cdbe123fecf1367638\") " pod="kube-system/kube-apiserver-localhost" Oct 28 00:12:53.286888 kubelet[2378]: I1028 00:12:53.286758 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6fdb466ced928cdbe123fecf1367638-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a6fdb466ced928cdbe123fecf1367638\") " pod="kube-system/kube-apiserver-localhost" Oct 28 00:12:53.286888 kubelet[2378]: I1028 00:12:53.286771 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 00:12:53.286888 kubelet[2378]: I1028 00:12:53.286800 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 00:12:53.287050 kubelet[2378]: I1028 00:12:53.286839 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 28 00:12:53.287050 kubelet[2378]: I1028 00:12:53.286853 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 00:12:53.348043 kubelet[2378]: I1028 00:12:53.348001 2378 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 00:12:53.348432 kubelet[2378]: E1028 00:12:53.348373 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Oct 28 00:12:53.440918 kubelet[2378]: E1028 00:12:53.440778 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:12:53.441722 containerd[1597]: time="2025-10-28T00:12:53.441664221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 28 00:12:53.446950 kubelet[2378]: E1028 00:12:53.446906 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:12:53.448198 containerd[1597]: time="2025-10-28T00:12:53.448131037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 28 00:12:53.449658 kubelet[2378]: E1028 
00:12:53.449623 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:12:53.450104 containerd[1597]: time="2025-10-28T00:12:53.450071716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a6fdb466ced928cdbe123fecf1367638,Namespace:kube-system,Attempt:0,}" Oct 28 00:12:53.589263 kubelet[2378]: E1028 00:12:53.589175 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms" Oct 28 00:12:53.749993 kubelet[2378]: I1028 00:12:53.749895 2378 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 00:12:53.750436 kubelet[2378]: E1028 00:12:53.750354 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Oct 28 00:12:53.980163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449117956.mount: Deactivated successfully. 
Oct 28 00:12:53.987963 containerd[1597]: time="2025-10-28T00:12:53.987908129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 00:12:53.988928 containerd[1597]: time="2025-10-28T00:12:53.988869722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 28 00:12:53.993326 containerd[1597]: time="2025-10-28T00:12:53.993297283Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 00:12:53.994355 containerd[1597]: time="2025-10-28T00:12:53.994311405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 00:12:53.995778 containerd[1597]: time="2025-10-28T00:12:53.995722110Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 00:12:53.996533 containerd[1597]: time="2025-10-28T00:12:53.996509997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 00:12:53.997554 containerd[1597]: time="2025-10-28T00:12:53.997531753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 00:12:53.998404 containerd[1597]: time="2025-10-28T00:12:53.998378891Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 
00:12:53.998570 containerd[1597]: time="2025-10-28T00:12:53.998526298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 547.448766ms"
Oct 28 00:12:54.000365 containerd[1597]: time="2025-10-28T00:12:54.000273113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 555.733871ms"
Oct 28 00:12:54.003617 containerd[1597]: time="2025-10-28T00:12:54.003575645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 549.693416ms"
Oct 28 00:12:54.033461 containerd[1597]: time="2025-10-28T00:12:54.032833013Z" level=info msg="connecting to shim 211a2b7bcc138d6b94170852cdfa08372e840cc8632895def0a8a7f5fafc95c7" address="unix:///run/containerd/s/14100c27499a3808517f3705a2a2d1a3f9ab35a0b2ceec324d3060c917deca0c" namespace=k8s.io protocol=ttrpc version=3
Oct 28 00:12:54.035274 containerd[1597]: time="2025-10-28T00:12:54.035242631Z" level=info msg="connecting to shim 65eda951e0013df9ab2061273aa1356e1b45726d00a696d190c682eefd30f65a" address="unix:///run/containerd/s/e7165ce59f35ebcd726b9d85a16d31a8871efa602e65f2981bb2d09bab42de98" namespace=k8s.io protocol=ttrpc version=3
Oct 28 00:12:54.042701 containerd[1597]: time="2025-10-28T00:12:54.042632256Z" level=info msg="connecting to shim b198aece208805fa1d8b8cdd0b2d91b978a70098a437a2deda149d58bcc36bb5" address="unix:///run/containerd/s/6c7b65b0e20cf0a3ff30555d453c3c72ef98d1f8b90ce5be0293d51b05a54a78" namespace=k8s.io protocol=ttrpc version=3
Oct 28 00:12:54.141592 systemd[1]: Started cri-containerd-b198aece208805fa1d8b8cdd0b2d91b978a70098a437a2deda149d58bcc36bb5.scope - libcontainer container b198aece208805fa1d8b8cdd0b2d91b978a70098a437a2deda149d58bcc36bb5.
Oct 28 00:12:54.146963 systemd[1]: Started cri-containerd-211a2b7bcc138d6b94170852cdfa08372e840cc8632895def0a8a7f5fafc95c7.scope - libcontainer container 211a2b7bcc138d6b94170852cdfa08372e840cc8632895def0a8a7f5fafc95c7.
Oct 28 00:12:54.148375 systemd[1]: Started cri-containerd-65eda951e0013df9ab2061273aa1356e1b45726d00a696d190c682eefd30f65a.scope - libcontainer container 65eda951e0013df9ab2061273aa1356e1b45726d00a696d190c682eefd30f65a.
Oct 28 00:12:54.166929 kubelet[2378]: E1028 00:12:54.166881 2378 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 28 00:12:54.197237 containerd[1597]: time="2025-10-28T00:12:54.197196815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b198aece208805fa1d8b8cdd0b2d91b978a70098a437a2deda149d58bcc36bb5\""
Oct 28 00:12:54.202663 kubelet[2378]: E1028 00:12:54.202470 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:54.212850 containerd[1597]: time="2025-10-28T00:12:54.212776908Z" level=info msg="CreateContainer within sandbox \"b198aece208805fa1d8b8cdd0b2d91b978a70098a437a2deda149d58bcc36bb5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 28 00:12:54.213149 containerd[1597]: time="2025-10-28T00:12:54.213047435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"211a2b7bcc138d6b94170852cdfa08372e840cc8632895def0a8a7f5fafc95c7\""
Oct 28 00:12:54.214076 kubelet[2378]: E1028 00:12:54.214010 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:54.219238 containerd[1597]: time="2025-10-28T00:12:54.219201243Z" level=info msg="CreateContainer within sandbox \"211a2b7bcc138d6b94170852cdfa08372e840cc8632895def0a8a7f5fafc95c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 28 00:12:54.289208 containerd[1597]: time="2025-10-28T00:12:54.288668304Z" level=info msg="Container 59ce3287bedb6ba4c9e8ae463d5bf34b6a4331b7fb14b74a3a57b359edf36d27: CDI devices from CRI Config.CDIDevices: []"
Oct 28 00:12:54.289310 kubelet[2378]: E1028 00:12:54.288791 2378 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 28 00:12:54.289523 containerd[1597]: time="2025-10-28T00:12:54.289456562Z" level=info msg="Container 3c8cd580c6ad7651de4a3d35a37a0b212fe3c67a4e7d4641dda83d49df2979bd: CDI devices from CRI Config.CDIDevices: []"
Oct 28 00:12:54.299372 containerd[1597]: time="2025-10-28T00:12:54.299345874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a6fdb466ced928cdbe123fecf1367638,Namespace:kube-system,Attempt:0,} returns sandbox id \"65eda951e0013df9ab2061273aa1356e1b45726d00a696d190c682eefd30f65a\""
Oct 28 00:12:54.300180 kubelet[2378]: E1028 00:12:54.300151 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:54.300540 containerd[1597]: time="2025-10-28T00:12:54.300509827Z" level=info msg="CreateContainer within sandbox \"b198aece208805fa1d8b8cdd0b2d91b978a70098a437a2deda149d58bcc36bb5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59ce3287bedb6ba4c9e8ae463d5bf34b6a4331b7fb14b74a3a57b359edf36d27\""
Oct 28 00:12:54.301065 containerd[1597]: time="2025-10-28T00:12:54.301031966Z" level=info msg="StartContainer for \"59ce3287bedb6ba4c9e8ae463d5bf34b6a4331b7fb14b74a3a57b359edf36d27\""
Oct 28 00:12:54.302140 containerd[1597]: time="2025-10-28T00:12:54.302117100Z" level=info msg="connecting to shim 59ce3287bedb6ba4c9e8ae463d5bf34b6a4331b7fb14b74a3a57b359edf36d27" address="unix:///run/containerd/s/6c7b65b0e20cf0a3ff30555d453c3c72ef98d1f8b90ce5be0293d51b05a54a78" protocol=ttrpc version=3
Oct 28 00:12:54.306146 containerd[1597]: time="2025-10-28T00:12:54.306098836Z" level=info msg="CreateContainer within sandbox \"211a2b7bcc138d6b94170852cdfa08372e840cc8632895def0a8a7f5fafc95c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c8cd580c6ad7651de4a3d35a37a0b212fe3c67a4e7d4641dda83d49df2979bd\""
Oct 28 00:12:54.306186 containerd[1597]: time="2025-10-28T00:12:54.306143640Z" level=info msg="CreateContainer within sandbox \"65eda951e0013df9ab2061273aa1356e1b45726d00a696d190c682eefd30f65a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 28 00:12:54.306523 containerd[1597]: time="2025-10-28T00:12:54.306498856Z" level=info msg="StartContainer for \"3c8cd580c6ad7651de4a3d35a37a0b212fe3c67a4e7d4641dda83d49df2979bd\""
Oct 28 00:12:54.307546 containerd[1597]: time="2025-10-28T00:12:54.307523748Z" level=info msg="connecting to shim 3c8cd580c6ad7651de4a3d35a37a0b212fe3c67a4e7d4641dda83d49df2979bd" address="unix:///run/containerd/s/14100c27499a3808517f3705a2a2d1a3f9ab35a0b2ceec324d3060c917deca0c" protocol=ttrpc version=3
Oct 28 00:12:54.316233 containerd[1597]: time="2025-10-28T00:12:54.316200567Z" level=info msg="Container de5d9c544df33e3babffcc1666d40fd9a2b2e7114ef69d2d35f1c825c73f3e38: CDI devices from CRI Config.CDIDevices: []"
Oct 28 00:12:54.325245 containerd[1597]: time="2025-10-28T00:12:54.325200221Z" level=info msg="CreateContainer within sandbox \"65eda951e0013df9ab2061273aa1356e1b45726d00a696d190c682eefd30f65a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de5d9c544df33e3babffcc1666d40fd9a2b2e7114ef69d2d35f1c825c73f3e38\""
Oct 28 00:12:54.325634 containerd[1597]: time="2025-10-28T00:12:54.325615911Z" level=info msg="StartContainer for \"de5d9c544df33e3babffcc1666d40fd9a2b2e7114ef69d2d35f1c825c73f3e38\""
Oct 28 00:12:54.326755 containerd[1597]: time="2025-10-28T00:12:54.326502623Z" level=info msg="connecting to shim de5d9c544df33e3babffcc1666d40fd9a2b2e7114ef69d2d35f1c825c73f3e38" address="unix:///run/containerd/s/e7165ce59f35ebcd726b9d85a16d31a8871efa602e65f2981bb2d09bab42de98" protocol=ttrpc version=3
Oct 28 00:12:54.326567 systemd[1]: Started cri-containerd-59ce3287bedb6ba4c9e8ae463d5bf34b6a4331b7fb14b74a3a57b359edf36d27.scope - libcontainer container 59ce3287bedb6ba4c9e8ae463d5bf34b6a4331b7fb14b74a3a57b359edf36d27.
Oct 28 00:12:54.330713 systemd[1]: Started cri-containerd-3c8cd580c6ad7651de4a3d35a37a0b212fe3c67a4e7d4641dda83d49df2979bd.scope - libcontainer container 3c8cd580c6ad7651de4a3d35a37a0b212fe3c67a4e7d4641dda83d49df2979bd.
Oct 28 00:12:54.352645 systemd[1]: Started cri-containerd-de5d9c544df33e3babffcc1666d40fd9a2b2e7114ef69d2d35f1c825c73f3e38.scope - libcontainer container de5d9c544df33e3babffcc1666d40fd9a2b2e7114ef69d2d35f1c825c73f3e38.
Oct 28 00:12:54.390385 kubelet[2378]: E1028 00:12:54.390318 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="1.6s"
Oct 28 00:12:54.402240 kubelet[2378]: E1028 00:12:54.400747 2378 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 28 00:12:54.406212 containerd[1597]: time="2025-10-28T00:12:54.406165962Z" level=info msg="StartContainer for \"3c8cd580c6ad7651de4a3d35a37a0b212fe3c67a4e7d4641dda83d49df2979bd\" returns successfully"
Oct 28 00:12:54.421208 containerd[1597]: time="2025-10-28T00:12:54.421168071Z" level=info msg="StartContainer for \"59ce3287bedb6ba4c9e8ae463d5bf34b6a4331b7fb14b74a3a57b359edf36d27\" returns successfully"
Oct 28 00:12:54.446699 containerd[1597]: time="2025-10-28T00:12:54.446655410Z" level=info msg="StartContainer for \"de5d9c544df33e3babffcc1666d40fd9a2b2e7114ef69d2d35f1c825c73f3e38\" returns successfully"
Oct 28 00:12:54.552193 kubelet[2378]: I1028 00:12:54.552080 2378 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 28 00:12:55.016717 kubelet[2378]: E1028 00:12:55.016489 2378 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 28 00:12:55.019456 kubelet[2378]: E1028 00:12:55.018817 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:55.019456 kubelet[2378]: E1028 00:12:55.019170 2378 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 28 00:12:55.019456 kubelet[2378]: E1028 00:12:55.019310 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:55.032595 kubelet[2378]: E1028 00:12:55.031299 2378 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 28 00:12:55.033006 kubelet[2378]: E1028 00:12:55.032978 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:55.999335 kubelet[2378]: E1028 00:12:55.998166 2378 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 28 00:12:56.010384 kubelet[2378]: I1028 00:12:56.010341 2378 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 28 00:12:56.010636 kubelet[2378]: E1028 00:12:56.010609 2378 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 28 00:12:56.025780 kubelet[2378]: E1028 00:12:56.025710 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.029599 kubelet[2378]: E1028 00:12:56.029291 2378 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 28 00:12:56.029599 kubelet[2378]: E1028 00:12:56.029426 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:56.030227 kubelet[2378]: E1028 00:12:56.030196 2378 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 28 00:12:56.030397 kubelet[2378]: E1028 00:12:56.030385 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:56.126279 kubelet[2378]: E1028 00:12:56.126226 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.227088 kubelet[2378]: E1028 00:12:56.227036 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.328003 kubelet[2378]: E1028 00:12:56.327884 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.428612 kubelet[2378]: E1028 00:12:56.428558 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.528970 kubelet[2378]: E1028 00:12:56.528914 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.629589 kubelet[2378]: E1028 00:12:56.629465 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.730154 kubelet[2378]: E1028 00:12:56.730094 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.830875 kubelet[2378]: E1028 00:12:56.830804 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.931850 kubelet[2378]: E1028 00:12:56.931700 2378 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 00:12:56.969986 kubelet[2378]: I1028 00:12:56.969921 2378 apiserver.go:52] "Watching apiserver"
Oct 28 00:12:56.986169 kubelet[2378]: I1028 00:12:56.986083 2378 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 28 00:12:56.986169 kubelet[2378]: I1028 00:12:56.986115 2378 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:12:57.003396 kubelet[2378]: I1028 00:12:57.003303 2378 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 28 00:12:57.004454 kubelet[2378]: E1028 00:12:57.004425 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:57.009871 kubelet[2378]: I1028 00:12:57.009812 2378 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 28 00:12:57.029680 kubelet[2378]: I1028 00:12:57.029637 2378 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 28 00:12:57.030027 kubelet[2378]: E1028 00:12:57.029992 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:57.035264 kubelet[2378]: E1028 00:12:57.035210 2378 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 28 00:12:57.035446 kubelet[2378]: E1028 00:12:57.035402 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:58.031328 kubelet[2378]: E1028 00:12:58.031293 2378 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:12:58.672161 systemd[1]: Reload requested from client PID 2664 ('systemctl') (unit session-7.scope)...
Oct 28 00:12:58.672184 systemd[1]: Reloading...
Oct 28 00:12:58.764475 zram_generator::config[2711]: No configuration found.
Oct 28 00:12:59.362693 systemd[1]: Reloading finished in 690 ms.
Oct 28 00:12:59.397102 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 00:12:59.421105 systemd[1]: kubelet.service: Deactivated successfully.
Oct 28 00:12:59.421502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 00:12:59.421570 systemd[1]: kubelet.service: Consumed 1.092s CPU time, 125.7M memory peak.
Oct 28 00:12:59.423898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 00:12:59.699748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 00:12:59.718719 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 28 00:12:59.761630 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 28 00:12:59.761630 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 28 00:12:59.762046 kubelet[2753]: I1028 00:12:59.761663 2753 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 28 00:12:59.767397 kubelet[2753]: I1028 00:12:59.767352 2753 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 28 00:12:59.767397 kubelet[2753]: I1028 00:12:59.767372 2753 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 28 00:12:59.767397 kubelet[2753]: I1028 00:12:59.767396 2753 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 28 00:12:59.767397 kubelet[2753]: I1028 00:12:59.767402 2753 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 28 00:12:59.767640 kubelet[2753]: I1028 00:12:59.767583 2753 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 28 00:12:59.768553 kubelet[2753]: I1028 00:12:59.768528 2753 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 28 00:12:59.770249 kubelet[2753]: I1028 00:12:59.770229 2753 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 28 00:12:59.776429 kubelet[2753]: I1028 00:12:59.774467 2753 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 28 00:12:59.779350 kubelet[2753]: I1028 00:12:59.779319 2753 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 28 00:12:59.779568 kubelet[2753]: I1028 00:12:59.779534 2753 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 28 00:12:59.779731 kubelet[2753]: I1028 00:12:59.779564 2753 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 28 00:12:59.779820 kubelet[2753]: I1028 00:12:59.779734 2753 topology_manager.go:138] "Creating topology manager with none policy"
Oct 28 00:12:59.779820 kubelet[2753]: I1028 00:12:59.779743 2753 container_manager_linux.go:306] "Creating device plugin manager"
Oct 28 00:12:59.779820 kubelet[2753]: I1028 00:12:59.779775 2753 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 28 00:12:59.780585 kubelet[2753]: I1028 00:12:59.780559 2753 state_mem.go:36] "Initialized new in-memory state store"
Oct 28 00:12:59.780780 kubelet[2753]: I1028 00:12:59.780756 2753 kubelet.go:475] "Attempting to sync node with API server"
Oct 28 00:12:59.780780 kubelet[2753]: I1028 00:12:59.780777 2753 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 28 00:12:59.780871 kubelet[2753]: I1028 00:12:59.780822 2753 kubelet.go:387] "Adding apiserver pod source"
Oct 28 00:12:59.780871 kubelet[2753]: I1028 00:12:59.780862 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 28 00:12:59.784439 kubelet[2753]: I1028 00:12:59.781684 2753 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 28 00:12:59.784439 kubelet[2753]: I1028 00:12:59.782116 2753 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 28 00:12:59.784439 kubelet[2753]: I1028 00:12:59.782139 2753 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 28 00:12:59.789668 kubelet[2753]: I1028 00:12:59.789645 2753 server.go:1262] "Started kubelet"
Oct 28 00:12:59.789978 kubelet[2753]: I1028 00:12:59.789925 2753 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 28 00:12:59.790027 kubelet[2753]: I1028 00:12:59.789945 2753 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 28 00:12:59.790027 kubelet[2753]: I1028 00:12:59.790007 2753 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 28 00:12:59.792080 kubelet[2753]: I1028 00:12:59.790229 2753 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 28 00:12:59.792080 kubelet[2753]: I1028 00:12:59.791051 2753 server.go:310] "Adding debug handlers to kubelet server"
Oct 28 00:12:59.794670 kubelet[2753]: I1028 00:12:59.794620 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 28 00:12:59.796000 kubelet[2753]: I1028 00:12:59.795318 2753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 28 00:12:59.798316 kubelet[2753]: I1028 00:12:59.798292 2753 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 28 00:12:59.798444 kubelet[2753]: I1028 00:12:59.798429 2753 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 28 00:12:59.798556 kubelet[2753]: I1028 00:12:59.798543 2753 reconciler.go:29] "Reconciler: start to sync state"
Oct 28 00:12:59.798833 kubelet[2753]: E1028 00:12:59.798808 2753 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 28 00:12:59.799718 kubelet[2753]: I1028 00:12:59.799678 2753 factory.go:223] Registration of the systemd container factory successfully
Oct 28 00:12:59.799852 kubelet[2753]: I1028 00:12:59.799815 2753 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 28 00:12:59.800977 kubelet[2753]: I1028 00:12:59.800956 2753 factory.go:223] Registration of the containerd container factory successfully
Oct 28 00:12:59.815383 kubelet[2753]: I1028 00:12:59.815349 2753 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 28 00:12:59.818145 kubelet[2753]: I1028 00:12:59.818114 2753 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 28 00:12:59.818145 kubelet[2753]: I1028 00:12:59.818132 2753 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 28 00:12:59.818298 kubelet[2753]: I1028 00:12:59.818259 2753 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 28 00:12:59.818323 kubelet[2753]: E1028 00:12:59.818301 2753 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 28 00:12:59.833754 kubelet[2753]: I1028 00:12:59.833715 2753 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 28 00:12:59.833754 kubelet[2753]: I1028 00:12:59.833731 2753 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 28 00:12:59.833754 kubelet[2753]: I1028 00:12:59.833749 2753 state_mem.go:36] "Initialized new in-memory state store"
Oct 28 00:12:59.833986 kubelet[2753]: I1028 00:12:59.833883 2753 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 28 00:12:59.833986 kubelet[2753]: I1028 00:12:59.833911 2753 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 28 00:12:59.833986 kubelet[2753]: I1028 00:12:59.833930 2753 policy_none.go:49] "None policy: Start"
Oct 28 00:12:59.833986 kubelet[2753]: I1028 00:12:59.833940 2753 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 28 00:12:59.833986 kubelet[2753]: I1028 00:12:59.833952 2753 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 28 00:12:59.834232 kubelet[2753]: I1028 00:12:59.834036 2753 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Oct 28 00:12:59.834232 kubelet[2753]: I1028 00:12:59.834043 2753 policy_none.go:47] "Start"
Oct 28 00:12:59.837827 kubelet[2753]: E1028 00:12:59.837782 2753 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 28 00:12:59.838038 kubelet[2753]: I1028 00:12:59.838012 2753 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 28 00:12:59.838156 kubelet[2753]: I1028 00:12:59.838037 2753 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 28 00:12:59.838531 kubelet[2753]: I1028 00:12:59.838492 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 28 00:12:59.838933 kubelet[2753]: E1028 00:12:59.838913 2753 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 28 00:12:59.919153 kubelet[2753]: I1028 00:12:59.919078 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:12:59.919153 kubelet[2753]: I1028 00:12:59.919169 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 28 00:12:59.919479 kubelet[2753]: I1028 00:12:59.919078 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 28 00:12:59.945296 kubelet[2753]: I1028 00:12:59.945242 2753 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 28 00:13:00.041964 kubelet[2753]: E1028 00:13:00.041620 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 28 00:13:00.041964 kubelet[2753]: E1028 00:13:00.041749 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:13:00.044027 kubelet[2753]: E1028 00:13:00.043958 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 28 00:13:00.045980 kubelet[2753]: I1028 00:13:00.045934 2753 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 28 00:13:00.046072 kubelet[2753]: I1028 00:13:00.046050 2753 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 28 00:13:00.099678 kubelet[2753]: I1028 00:13:00.099634 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6fdb466ced928cdbe123fecf1367638-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a6fdb466ced928cdbe123fecf1367638\") " pod="kube-system/kube-apiserver-localhost"
Oct 28 00:13:00.099850 kubelet[2753]: I1028 00:13:00.099682 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:13:00.099850 kubelet[2753]: I1028 00:13:00.099716 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:13:00.099850 kubelet[2753]: I1028 00:13:00.099738 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:13:00.099850 kubelet[2753]: I1028 00:13:00.099754 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Oct 28 00:13:00.099850 kubelet[2753]: I1028 00:13:00.099768 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6fdb466ced928cdbe123fecf1367638-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a6fdb466ced928cdbe123fecf1367638\") " pod="kube-system/kube-apiserver-localhost"
Oct 28 00:13:00.099967 kubelet[2753]: I1028 00:13:00.099918 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6fdb466ced928cdbe123fecf1367638-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a6fdb466ced928cdbe123fecf1367638\") " pod="kube-system/kube-apiserver-localhost"
Oct 28 00:13:00.100272 kubelet[2753]: I1028 00:13:00.100208 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:13:00.100369 kubelet[2753]: I1028 00:13:00.100318 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:13:00.342220 kubelet[2753]: E1028 00:13:00.342087 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:00.342220 kubelet[2753]: E1028 00:13:00.342092 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:00.344748 kubelet[2753]: E1028 00:13:00.344723 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:00.782475 kubelet[2753]: I1028 00:13:00.782343 2753 apiserver.go:52] "Watching apiserver"
Oct 28 00:13:00.798782 kubelet[2753]: I1028 00:13:00.798737 2753 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 28 00:13:00.828747 kubelet[2753]: E1028 00:13:00.828709 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:00.829327 kubelet[2753]: E1028 00:13:00.829294 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:00.829511 kubelet[2753]: I1028 00:13:00.829436 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:13:00.835154 kubelet[2753]: E1028 00:13:00.835108 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 28 00:13:00.835306 kubelet[2753]: E1028 00:13:00.835283 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:00.869673 kubelet[2753]: I1028 00:13:00.869578 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.869548558 podStartE2EDuration="3.869548558s" podCreationTimestamp="2025-10-28 00:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 00:13:00.856368111 +0000 UTC m=+1.133999715" watchObservedRunningTime="2025-10-28 00:13:00.869548558 +0000 UTC m=+1.147180182"
Oct 28 00:13:00.869835 kubelet[2753]: I1028 00:13:00.869736 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.869729303 podStartE2EDuration="4.869729303s" podCreationTimestamp="2025-10-28 00:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 00:13:00.867757241 +0000 UTC m=+1.145388855" watchObservedRunningTime="2025-10-28 00:13:00.869729303 +0000 UTC m=+1.147360927"
Oct 28 00:13:01.830435 kubelet[2753]: E1028 00:13:01.830198 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:01.830435 kubelet[2753]: E1028 00:13:01.830284 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:01.830435 kubelet[2753]: E1028 00:13:01.830338 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:04.372830 kubelet[2753]: I1028 00:13:04.372768 2753 kuberuntime_manager.go:1828] "Updating runtime config through cri with
podcidr" CIDR="192.168.0.0/24" Oct 28 00:13:04.373338 containerd[1597]: time="2025-10-28T00:13:04.373293245Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 28 00:13:04.373590 kubelet[2753]: I1028 00:13:04.373543 2753 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 00:13:04.936700 kubelet[2753]: E1028 00:13:04.936615 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:04.957940 kubelet[2753]: I1028 00:13:04.957812 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.957793571 podStartE2EDuration="7.957793571s" podCreationTimestamp="2025-10-28 00:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 00:13:00.876928523 +0000 UTC m=+1.154560127" watchObservedRunningTime="2025-10-28 00:13:04.957793571 +0000 UTC m=+5.235425175" Oct 28 00:13:05.553187 kubelet[2753]: E1028 00:13:05.553123 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:05.835690 kubelet[2753]: E1028 00:13:05.835550 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:05.835690 kubelet[2753]: E1028 00:13:05.835618 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:06.490664 systemd[1]: Created slice kubepods-besteffort-pod18539c11_80ee_43f5_a3ac_dc634f5ab581.slice - 
libcontainer container kubepods-besteffort-pod18539c11_80ee_43f5_a3ac_dc634f5ab581.slice. Oct 28 00:13:06.539613 kubelet[2753]: I1028 00:13:06.539541 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18539c11-80ee-43f5-a3ac-dc634f5ab581-lib-modules\") pod \"kube-proxy-4p66l\" (UID: \"18539c11-80ee-43f5-a3ac-dc634f5ab581\") " pod="kube-system/kube-proxy-4p66l" Oct 28 00:13:06.539613 kubelet[2753]: I1028 00:13:06.539594 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18539c11-80ee-43f5-a3ac-dc634f5ab581-kube-proxy\") pod \"kube-proxy-4p66l\" (UID: \"18539c11-80ee-43f5-a3ac-dc634f5ab581\") " pod="kube-system/kube-proxy-4p66l" Oct 28 00:13:06.539613 kubelet[2753]: I1028 00:13:06.539610 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18539c11-80ee-43f5-a3ac-dc634f5ab581-xtables-lock\") pod \"kube-proxy-4p66l\" (UID: \"18539c11-80ee-43f5-a3ac-dc634f5ab581\") " pod="kube-system/kube-proxy-4p66l" Oct 28 00:13:06.539836 kubelet[2753]: I1028 00:13:06.539625 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2npc\" (UniqueName: \"kubernetes.io/projected/18539c11-80ee-43f5-a3ac-dc634f5ab581-kube-api-access-z2npc\") pod \"kube-proxy-4p66l\" (UID: \"18539c11-80ee-43f5-a3ac-dc634f5ab581\") " pod="kube-system/kube-proxy-4p66l" Oct 28 00:13:06.837237 kubelet[2753]: E1028 00:13:06.837209 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:06.960993 systemd[1]: Created slice kubepods-besteffort-pod3835a3d5_9912_453c_9dc8_04e07c800106.slice - libcontainer container 
kubepods-besteffort-pod3835a3d5_9912_453c_9dc8_04e07c800106.slice. Oct 28 00:13:07.041486 kubelet[2753]: I1028 00:13:07.041431 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3835a3d5-9912-453c-9dc8-04e07c800106-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-nxpzk\" (UID: \"3835a3d5-9912-453c-9dc8-04e07c800106\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-nxpzk" Oct 28 00:13:07.041486 kubelet[2753]: I1028 00:13:07.041480 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9vzs\" (UniqueName: \"kubernetes.io/projected/3835a3d5-9912-453c-9dc8-04e07c800106-kube-api-access-n9vzs\") pod \"tigera-operator-65cdcdfd6d-nxpzk\" (UID: \"3835a3d5-9912-453c-9dc8-04e07c800106\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-nxpzk" Oct 28 00:13:07.162565 kubelet[2753]: E1028 00:13:07.162450 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:07.163314 containerd[1597]: time="2025-10-28T00:13:07.163276686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4p66l,Uid:18539c11-80ee-43f5-a3ac-dc634f5ab581,Namespace:kube-system,Attempt:0,}" Oct 28 00:13:07.260305 containerd[1597]: time="2025-10-28T00:13:07.260247459Z" level=info msg="connecting to shim 5866f26e508821c32eb9b113eb8fbd2dc4fbf1210dbb75c998e81b52d4426e4d" address="unix:///run/containerd/s/b298518cd0357cde6cf83fd1e0899f9e5b6537806a5b5e075ef8cfbf12c61031" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:07.272976 containerd[1597]: time="2025-10-28T00:13:07.272901243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-nxpzk,Uid:3835a3d5-9912-453c-9dc8-04e07c800106,Namespace:tigera-operator,Attempt:0,}" Oct 28 00:13:07.291705 systemd[1]: 
Started cri-containerd-5866f26e508821c32eb9b113eb8fbd2dc4fbf1210dbb75c998e81b52d4426e4d.scope - libcontainer container 5866f26e508821c32eb9b113eb8fbd2dc4fbf1210dbb75c998e81b52d4426e4d. Oct 28 00:13:07.299095 containerd[1597]: time="2025-10-28T00:13:07.299012215Z" level=info msg="connecting to shim 7125f235c9dcd573e28752e7094b92f593d3a8f9e9f67277470b006aa6ec0eb5" address="unix:///run/containerd/s/e096d6b9f4e8f252df42bc62db9714252dca59a96ec010c97cf41896643ce4e8" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:07.331690 systemd[1]: Started cri-containerd-7125f235c9dcd573e28752e7094b92f593d3a8f9e9f67277470b006aa6ec0eb5.scope - libcontainer container 7125f235c9dcd573e28752e7094b92f593d3a8f9e9f67277470b006aa6ec0eb5. Oct 28 00:13:07.338388 containerd[1597]: time="2025-10-28T00:13:07.336561708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4p66l,Uid:18539c11-80ee-43f5-a3ac-dc634f5ab581,Namespace:kube-system,Attempt:0,} returns sandbox id \"5866f26e508821c32eb9b113eb8fbd2dc4fbf1210dbb75c998e81b52d4426e4d\"" Oct 28 00:13:07.338520 kubelet[2753]: E1028 00:13:07.337730 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:07.346921 containerd[1597]: time="2025-10-28T00:13:07.346673596Z" level=info msg="CreateContainer within sandbox \"5866f26e508821c32eb9b113eb8fbd2dc4fbf1210dbb75c998e81b52d4426e4d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 00:13:07.367446 containerd[1597]: time="2025-10-28T00:13:07.366603522Z" level=info msg="Container b941c0196773642f720f18944f212088570f2aea450a8ae5fa3cd8b4832e5f60: CDI devices from CRI Config.CDIDevices: []" Oct 28 00:13:07.380807 containerd[1597]: time="2025-10-28T00:13:07.380740117Z" level=info msg="CreateContainer within sandbox \"5866f26e508821c32eb9b113eb8fbd2dc4fbf1210dbb75c998e81b52d4426e4d\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b941c0196773642f720f18944f212088570f2aea450a8ae5fa3cd8b4832e5f60\"" Oct 28 00:13:07.381655 containerd[1597]: time="2025-10-28T00:13:07.381607420Z" level=info msg="StartContainer for \"b941c0196773642f720f18944f212088570f2aea450a8ae5fa3cd8b4832e5f60\"" Oct 28 00:13:07.383543 containerd[1597]: time="2025-10-28T00:13:07.383496782Z" level=info msg="connecting to shim b941c0196773642f720f18944f212088570f2aea450a8ae5fa3cd8b4832e5f60" address="unix:///run/containerd/s/b298518cd0357cde6cf83fd1e0899f9e5b6537806a5b5e075ef8cfbf12c61031" protocol=ttrpc version=3 Oct 28 00:13:07.390323 containerd[1597]: time="2025-10-28T00:13:07.390226969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-nxpzk,Uid:3835a3d5-9912-453c-9dc8-04e07c800106,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7125f235c9dcd573e28752e7094b92f593d3a8f9e9f67277470b006aa6ec0eb5\"" Oct 28 00:13:07.392143 containerd[1597]: time="2025-10-28T00:13:07.392104196Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 28 00:13:07.414782 systemd[1]: Started cri-containerd-b941c0196773642f720f18944f212088570f2aea450a8ae5fa3cd8b4832e5f60.scope - libcontainer container b941c0196773642f720f18944f212088570f2aea450a8ae5fa3cd8b4832e5f60. 
Oct 28 00:13:07.470322 containerd[1597]: time="2025-10-28T00:13:07.470265638Z" level=info msg="StartContainer for \"b941c0196773642f720f18944f212088570f2aea450a8ae5fa3cd8b4832e5f60\" returns successfully" Oct 28 00:13:07.851175 kubelet[2753]: E1028 00:13:07.851134 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:07.869848 kubelet[2753]: I1028 00:13:07.868546 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4p66l" podStartSLOduration=2.868521213 podStartE2EDuration="2.868521213s" podCreationTimestamp="2025-10-28 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 00:13:07.867042068 +0000 UTC m=+8.144673683" watchObservedRunningTime="2025-10-28 00:13:07.868521213 +0000 UTC m=+8.146152837" Oct 28 00:13:09.984894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958459921.mount: Deactivated successfully. Oct 28 00:13:10.257309 kubelet[2753]: E1028 00:13:10.257195 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:10.856763 kubelet[2753]: E1028 00:13:10.856730 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:10.956647 update_engine[1583]: I20251028 00:13:10.956571 1583 update_attempter.cc:509] Updating boot flags... 
Oct 28 00:13:11.485342 containerd[1597]: time="2025-10-28T00:13:11.485253513Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:11.503197 containerd[1597]: time="2025-10-28T00:13:11.503102261Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 28 00:13:11.577521 containerd[1597]: time="2025-10-28T00:13:11.577458044Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:11.643728 containerd[1597]: time="2025-10-28T00:13:11.643654427Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:11.644254 containerd[1597]: time="2025-10-28T00:13:11.644215077Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.252086775s" Oct 28 00:13:11.644317 containerd[1597]: time="2025-10-28T00:13:11.644256746Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 28 00:13:11.793617 containerd[1597]: time="2025-10-28T00:13:11.793481750Z" level=info msg="CreateContainer within sandbox \"7125f235c9dcd573e28752e7094b92f593d3a8f9e9f67277470b006aa6ec0eb5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 28 00:13:11.858479 kubelet[2753]: E1028 00:13:11.858443 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:11.962845 containerd[1597]: time="2025-10-28T00:13:11.962785319Z" level=info msg="Container 60bf03f17517f7eac3469be88663575c3e3a731d400e6d9cc651f5bb1d0da30b: CDI devices from CRI Config.CDIDevices: []" Oct 28 00:13:12.154242 containerd[1597]: time="2025-10-28T00:13:12.154170128Z" level=info msg="CreateContainer within sandbox \"7125f235c9dcd573e28752e7094b92f593d3a8f9e9f67277470b006aa6ec0eb5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"60bf03f17517f7eac3469be88663575c3e3a731d400e6d9cc651f5bb1d0da30b\"" Oct 28 00:13:12.154971 containerd[1597]: time="2025-10-28T00:13:12.154866564Z" level=info msg="StartContainer for \"60bf03f17517f7eac3469be88663575c3e3a731d400e6d9cc651f5bb1d0da30b\"" Oct 28 00:13:12.155787 containerd[1597]: time="2025-10-28T00:13:12.155724505Z" level=info msg="connecting to shim 60bf03f17517f7eac3469be88663575c3e3a731d400e6d9cc651f5bb1d0da30b" address="unix:///run/containerd/s/e096d6b9f4e8f252df42bc62db9714252dca59a96ec010c97cf41896643ce4e8" protocol=ttrpc version=3 Oct 28 00:13:12.213536 systemd[1]: Started cri-containerd-60bf03f17517f7eac3469be88663575c3e3a731d400e6d9cc651f5bb1d0da30b.scope - libcontainer container 60bf03f17517f7eac3469be88663575c3e3a731d400e6d9cc651f5bb1d0da30b. Oct 28 00:13:12.320719 containerd[1597]: time="2025-10-28T00:13:12.320600961Z" level=info msg="StartContainer for \"60bf03f17517f7eac3469be88663575c3e3a731d400e6d9cc651f5bb1d0da30b\" returns successfully" Oct 28 00:13:18.796052 sudo[1811]: pam_unix(sudo:session): session closed for user root Oct 28 00:13:18.799227 sshd[1810]: Connection closed by 10.0.0.1 port 54766 Oct 28 00:13:18.803003 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Oct 28 00:13:18.806903 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:54766.service: Deactivated successfully. 
Oct 28 00:13:18.810182 systemd[1]: session-7.scope: Deactivated successfully. Oct 28 00:13:18.811105 systemd[1]: session-7.scope: Consumed 6.303s CPU time, 224M memory peak. Oct 28 00:13:18.813884 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit. Oct 28 00:13:18.815873 systemd-logind[1578]: Removed session 7. Oct 28 00:13:23.320496 kubelet[2753]: I1028 00:13:23.320433 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-nxpzk" podStartSLOduration=13.052879795 podStartE2EDuration="17.320399222s" podCreationTimestamp="2025-10-28 00:13:06 +0000 UTC" firstStartedPulling="2025-10-28 00:13:07.391732372 +0000 UTC m=+7.669363976" lastFinishedPulling="2025-10-28 00:13:11.659251809 +0000 UTC m=+11.936883403" observedRunningTime="2025-10-28 00:13:12.960183331 +0000 UTC m=+13.237814965" watchObservedRunningTime="2025-10-28 00:13:23.320399222 +0000 UTC m=+23.598030826" Oct 28 00:13:23.335556 systemd[1]: Created slice kubepods-besteffort-podc057bf54_9a52_4caa_ad08_e3d42963fee9.slice - libcontainer container kubepods-besteffort-podc057bf54_9a52_4caa_ad08_e3d42963fee9.slice. 
Oct 28 00:13:23.345657 kubelet[2753]: I1028 00:13:23.345608 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c057bf54-9a52-4caa-ad08-e3d42963fee9-tigera-ca-bundle\") pod \"calico-typha-695fdc8956-p69cx\" (UID: \"c057bf54-9a52-4caa-ad08-e3d42963fee9\") " pod="calico-system/calico-typha-695fdc8956-p69cx" Oct 28 00:13:23.345825 kubelet[2753]: I1028 00:13:23.345698 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c057bf54-9a52-4caa-ad08-e3d42963fee9-typha-certs\") pod \"calico-typha-695fdc8956-p69cx\" (UID: \"c057bf54-9a52-4caa-ad08-e3d42963fee9\") " pod="calico-system/calico-typha-695fdc8956-p69cx" Oct 28 00:13:23.345825 kubelet[2753]: I1028 00:13:23.345718 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4sm\" (UniqueName: \"kubernetes.io/projected/c057bf54-9a52-4caa-ad08-e3d42963fee9-kube-api-access-qf4sm\") pod \"calico-typha-695fdc8956-p69cx\" (UID: \"c057bf54-9a52-4caa-ad08-e3d42963fee9\") " pod="calico-system/calico-typha-695fdc8956-p69cx" Oct 28 00:13:23.609399 systemd[1]: Created slice kubepods-besteffort-podee33b4b9_4c42_4528_bd54_7dae50d1aa2a.slice - libcontainer container kubepods-besteffort-podee33b4b9_4c42_4528_bd54_7dae50d1aa2a.slice. 
Oct 28 00:13:23.648609 kubelet[2753]: I1028 00:13:23.648550 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-cni-log-dir\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648609 kubelet[2753]: I1028 00:13:23.648604 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-node-certs\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648794 kubelet[2753]: I1028 00:13:23.648626 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-policysync\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648794 kubelet[2753]: I1028 00:13:23.648647 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-flexvol-driver-host\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648794 kubelet[2753]: I1028 00:13:23.648668 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-lib-modules\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648794 kubelet[2753]: I1028 00:13:23.648705 2753 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-cni-net-dir\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648794 kubelet[2753]: I1028 00:13:23.648730 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-xtables-lock\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648902 kubelet[2753]: I1028 00:13:23.648751 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc7fj\" (UniqueName: \"kubernetes.io/projected/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-kube-api-access-xc7fj\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648902 kubelet[2753]: I1028 00:13:23.648804 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-cni-bin-dir\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648902 kubelet[2753]: I1028 00:13:23.648858 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-tigera-ca-bundle\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648969 kubelet[2753]: I1028 00:13:23.648915 2753 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-var-lib-calico\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.648969 kubelet[2753]: I1028 00:13:23.648933 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ee33b4b9-4c42-4528-bd54-7dae50d1aa2a-var-run-calico\") pod \"calico-node-nm6p8\" (UID: \"ee33b4b9-4c42-4528-bd54-7dae50d1aa2a\") " pod="calico-system/calico-node-nm6p8" Oct 28 00:13:23.664345 kubelet[2753]: E1028 00:13:23.664297 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:23.668962 containerd[1597]: time="2025-10-28T00:13:23.668922902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-695fdc8956-p69cx,Uid:c057bf54-9a52-4caa-ad08-e3d42963fee9,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:23.716072 containerd[1597]: time="2025-10-28T00:13:23.715709007Z" level=info msg="connecting to shim fa332a6ba3513b7d360c08fd210dd03d8d3bf9ae8e387a26856938c12a41dea4" address="unix:///run/containerd/s/1e894bb152f88ec6b2431c1801e8dbca9006c91bee9d2c026c04ea40f41bb9a2" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:23.716428 kubelet[2753]: E1028 00:13:23.716341 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:13:23.745805 systemd[1]: Started cri-containerd-fa332a6ba3513b7d360c08fd210dd03d8d3bf9ae8e387a26856938c12a41dea4.scope 
- libcontainer container fa332a6ba3513b7d360c08fd210dd03d8d3bf9ae8e387a26856938c12a41dea4. Oct 28 00:13:23.749859 kubelet[2753]: I1028 00:13:23.749658 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf7db7c-cf1f-40f0-bd37-4896435636ad-kubelet-dir\") pod \"csi-node-driver-hgknx\" (UID: \"9cf7db7c-cf1f-40f0-bd37-4896435636ad\") " pod="calico-system/csi-node-driver-hgknx" Oct 28 00:13:23.750064 kubelet[2753]: I1028 00:13:23.749882 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cf7db7c-cf1f-40f0-bd37-4896435636ad-registration-dir\") pod \"csi-node-driver-hgknx\" (UID: \"9cf7db7c-cf1f-40f0-bd37-4896435636ad\") " pod="calico-system/csi-node-driver-hgknx" Oct 28 00:13:23.750064 kubelet[2753]: I1028 00:13:23.749900 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9cf7db7c-cf1f-40f0-bd37-4896435636ad-varrun\") pod \"csi-node-driver-hgknx\" (UID: \"9cf7db7c-cf1f-40f0-bd37-4896435636ad\") " pod="calico-system/csi-node-driver-hgknx" Oct 28 00:13:23.750064 kubelet[2753]: I1028 00:13:23.749968 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttp4m\" (UniqueName: \"kubernetes.io/projected/9cf7db7c-cf1f-40f0-bd37-4896435636ad-kube-api-access-ttp4m\") pod \"csi-node-driver-hgknx\" (UID: \"9cf7db7c-cf1f-40f0-bd37-4896435636ad\") " pod="calico-system/csi-node-driver-hgknx" Oct 28 00:13:23.750482 kubelet[2753]: I1028 00:13:23.750456 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cf7db7c-cf1f-40f0-bd37-4896435636ad-socket-dir\") pod \"csi-node-driver-hgknx\" (UID: 
\"9cf7db7c-cf1f-40f0-bd37-4896435636ad\") " pod="calico-system/csi-node-driver-hgknx" Oct 28 00:13:23.756332 kubelet[2753]: E1028 00:13:23.755284 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.756332 kubelet[2753]: W1028 00:13:23.756330 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.756472 kubelet[2753]: E1028 00:13:23.756358 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.758790 kubelet[2753]: E1028 00:13:23.758580 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.758790 kubelet[2753]: W1028 00:13:23.758595 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.758790 kubelet[2753]: E1028 00:13:23.758607 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.758893 kubelet[2753]: E1028 00:13:23.758848 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.758893 kubelet[2753]: W1028 00:13:23.758859 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.758893 kubelet[2753]: E1028 00:13:23.758871 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.766895 kubelet[2753]: E1028 00:13:23.766867 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.766895 kubelet[2753]: W1028 00:13:23.766889 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.767000 kubelet[2753]: E1028 00:13:23.766907 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.851503 kubelet[2753]: E1028 00:13:23.851467 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.851503 kubelet[2753]: W1028 00:13:23.851493 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.851684 kubelet[2753]: E1028 00:13:23.851517 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.851823 kubelet[2753]: E1028 00:13:23.851808 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.851823 kubelet[2753]: W1028 00:13:23.851820 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.851911 kubelet[2753]: E1028 00:13:23.851830 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.852010 kubelet[2753]: E1028 00:13:23.851998 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.852010 kubelet[2753]: W1028 00:13:23.852006 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.852010 kubelet[2753]: E1028 00:13:23.852014 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.852230 kubelet[2753]: E1028 00:13:23.852210 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.852230 kubelet[2753]: W1028 00:13:23.852220 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.852292 kubelet[2753]: E1028 00:13:23.852230 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.852558 kubelet[2753]: E1028 00:13:23.852528 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.852608 kubelet[2753]: W1028 00:13:23.852554 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.852608 kubelet[2753]: E1028 00:13:23.852576 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.852802 kubelet[2753]: E1028 00:13:23.852780 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.852802 kubelet[2753]: W1028 00:13:23.852798 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.852904 kubelet[2753]: E1028 00:13:23.852811 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.853044 kubelet[2753]: E1028 00:13:23.853026 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.853044 kubelet[2753]: W1028 00:13:23.853041 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.853127 kubelet[2753]: E1028 00:13:23.853053 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.853263 kubelet[2753]: E1028 00:13:23.853249 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.853263 kubelet[2753]: W1028 00:13:23.853261 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.853330 kubelet[2753]: E1028 00:13:23.853271 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.853464 kubelet[2753]: E1028 00:13:23.853449 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.853464 kubelet[2753]: W1028 00:13:23.853460 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.853559 kubelet[2753]: E1028 00:13:23.853470 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.853668 kubelet[2753]: E1028 00:13:23.853653 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.853668 kubelet[2753]: W1028 00:13:23.853663 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.853668 kubelet[2753]: E1028 00:13:23.853671 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.853864 kubelet[2753]: E1028 00:13:23.853848 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.853864 kubelet[2753]: W1028 00:13:23.853858 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.853919 kubelet[2753]: E1028 00:13:23.853866 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.854100 kubelet[2753]: E1028 00:13:23.854078 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.854100 kubelet[2753]: W1028 00:13:23.854095 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.854171 kubelet[2753]: E1028 00:13:23.854109 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.854347 kubelet[2753]: E1028 00:13:23.854328 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.854347 kubelet[2753]: W1028 00:13:23.854342 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.854429 kubelet[2753]: E1028 00:13:23.854353 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.854601 kubelet[2753]: E1028 00:13:23.854582 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.854601 kubelet[2753]: W1028 00:13:23.854596 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.854702 kubelet[2753]: E1028 00:13:23.854608 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.854823 kubelet[2753]: E1028 00:13:23.854807 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.854823 kubelet[2753]: W1028 00:13:23.854821 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.854881 kubelet[2753]: E1028 00:13:23.854831 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.855029 kubelet[2753]: E1028 00:13:23.855015 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.855029 kubelet[2753]: W1028 00:13:23.855025 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.855093 kubelet[2753]: E1028 00:13:23.855035 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.855229 kubelet[2753]: E1028 00:13:23.855212 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.855229 kubelet[2753]: W1028 00:13:23.855225 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.855307 kubelet[2753]: E1028 00:13:23.855236 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.855478 kubelet[2753]: E1028 00:13:23.855465 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.855478 kubelet[2753]: W1028 00:13:23.855476 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.855555 kubelet[2753]: E1028 00:13:23.855485 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.855684 kubelet[2753]: E1028 00:13:23.855656 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.855684 kubelet[2753]: W1028 00:13:23.855667 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.855766 kubelet[2753]: E1028 00:13:23.855687 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.855894 kubelet[2753]: E1028 00:13:23.855879 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.855894 kubelet[2753]: W1028 00:13:23.855889 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.855953 kubelet[2753]: E1028 00:13:23.855898 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.856086 kubelet[2753]: E1028 00:13:23.856071 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.856086 kubelet[2753]: W1028 00:13:23.856082 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.856176 kubelet[2753]: E1028 00:13:23.856092 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.856304 kubelet[2753]: E1028 00:13:23.856286 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.856304 kubelet[2753]: W1028 00:13:23.856297 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.856386 kubelet[2753]: E1028 00:13:23.856307 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:23.856689 kubelet[2753]: E1028 00:13:23.856602 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.856689 kubelet[2753]: W1028 00:13:23.856612 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.856689 kubelet[2753]: E1028 00:13:23.856621 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:23.857253 kubelet[2753]: E1028 00:13:23.856876 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:23.857253 kubelet[2753]: W1028 00:13:23.856888 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:23.857253 kubelet[2753]: E1028 00:13:23.856896 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 28 00:13:23.891404 kubelet[2753]: E1028 00:13:23.891316 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 00:13:23.891404 kubelet[2753]: W1028 00:13:23.891340 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 00:13:23.891404 kubelet[2753]: E1028 00:13:23.891357 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 00:13:23.927341 containerd[1597]: time="2025-10-28T00:13:23.927287247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-695fdc8956-p69cx,Uid:c057bf54-9a52-4caa-ad08-e3d42963fee9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa332a6ba3513b7d360c08fd210dd03d8d3bf9ae8e387a26856938c12a41dea4\""
Oct 28 00:13:23.928038 kubelet[2753]: E1028 00:13:23.928015 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:23.928697 containerd[1597]: time="2025-10-28T00:13:23.928659550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 28 00:13:23.980645 kubelet[2753]: E1028 00:13:23.980598 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 00:13:23.980645 kubelet[2753]: W1028 00:13:23.980622 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 00:13:23.980645 kubelet[2753]: E1028 00:13:23.980646 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 00:13:24.062768 kubelet[2753]: E1028 00:13:24.062732 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:24.063215 containerd[1597]: time="2025-10-28T00:13:24.063171331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nm6p8,Uid:ee33b4b9-4c42-4528-bd54-7dae50d1aa2a,Namespace:calico-system,Attempt:0,}"
Oct 28 00:13:24.140799 containerd[1597]: time="2025-10-28T00:13:24.140625606Z" level=info msg="connecting to shim 06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f" address="unix:///run/containerd/s/fbde84ffc553359da1ac3a8f40f9d1550d3d87be582f5c54afd192a0b9165c2a" namespace=k8s.io protocol=ttrpc version=3
Oct 28 00:13:24.168597 systemd[1]: Started cri-containerd-06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f.scope - libcontainer container 06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f.
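The repeated driver-call.go/plugins.go errors above are the kubelet's FlexVolume probe failing: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable is missing, and the empty stdout then fails JSON decoding ("unexpected end of JSON input"). A FlexVolume driver is just an executable that answers each call with a JSON status object on stdout; the hypothetical stub below (not the real nodeagent~uds driver, and FlexVolume itself is deprecated in favor of CSI) sketches the reply shape the kubelet expects from init:

```python
#!/usr/bin/env python3
# Hypothetical stand-in for the missing FlexVolume driver binary.
# The kubelet invokes `<driver> init` and parses stdout as JSON; an
# empty reply is exactly what produces the "unexpected end of JSON
# input" errors seen in the log above.
import json
import sys


def handle(args):
    op = args[0] if args else ""
    if op == "init":
        # Tell the kubelet this driver needs no attach/detach calls.
        return {"status": "Success", "capabilities": {"attach": False}}
    # Any operation this stub does not implement.
    return {"status": "Not supported"}


if __name__ == "__main__":
    print(json.dumps(handle(sys.argv[1:])))
```

With a valid (even if minimal) JSON reply, the plugin probe in plugins.go would no longer fail to unmarshal, though the real fix on this host is simply that the driver binary is absent.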
Oct 28 00:13:24.197092 containerd[1597]: time="2025-10-28T00:13:24.197053554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nm6p8,Uid:ee33b4b9-4c42-4528-bd54-7dae50d1aa2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f\""
Oct 28 00:13:24.197746 kubelet[2753]: E1028 00:13:24.197723 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:24.819326 kubelet[2753]: E1028 00:13:24.818889 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad"
Oct 28 00:13:25.940343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558012118.mount: Deactivated successfully.
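The recurring dns.go:154 warnings record the kubelet truncating the node's resolver list: the glibc resolver honors at most three nameserver entries, so only the first three survive into the applied line ("1.1.1.1 1.0.0.1 8.8.8.8"). A minimal sketch of that truncation follows; the four-entry resolv.conf is a made-up example, since the log shows only the applied result:

```python
# Sketch of the nameserver cap behind the kubelet's dns.go:154 warning.
# The resolv.conf content below is hypothetical; the log only shows the
# applied line "1.1.1.1 1.0.0.1 8.8.8.8".
MAX_NAMESERVERS = 3  # glibc MAXNS; the kubelet warns past this limit


def applied_nameservers(resolv_conf: str) -> list:
    """Return the nameservers that would actually be applied."""
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]


host_resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""
```

Under this assumption, applied_nameservers(host_resolv_conf) keeps only the first three entries, matching the applied line in the warnings above.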
Oct 28 00:13:26.454399 containerd[1597]: time="2025-10-28T00:13:26.454308405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 00:13:26.455516 containerd[1597]: time="2025-10-28T00:13:26.455477344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Oct 28 00:13:26.457069 containerd[1597]: time="2025-10-28T00:13:26.457033069Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 00:13:26.461471 containerd[1597]: time="2025-10-28T00:13:26.461395174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.532694717s"
Oct 28 00:13:26.461471 containerd[1597]: time="2025-10-28T00:13:26.461467110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Oct 28 00:13:26.461605 containerd[1597]: time="2025-10-28T00:13:26.461580673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 00:13:26.462888 containerd[1597]: time="2025-10-28T00:13:26.462855301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Oct 28 00:13:26.475227 containerd[1597]: time="2025-10-28T00:13:26.475179042Z" level=info msg="CreateContainer within sandbox \"fa332a6ba3513b7d360c08fd210dd03d8d3bf9ae8e387a26856938c12a41dea4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 28 00:13:26.485053 containerd[1597]: time="2025-10-28T00:13:26.484134978Z" level=info msg="Container 0fb87565a09e003ec8af6c1393b6afbc9fcf41baae91a1e533c9103d95b145c7: CDI devices from CRI Config.CDIDevices: []"
Oct 28 00:13:26.492714 containerd[1597]: time="2025-10-28T00:13:26.492653411Z" level=info msg="CreateContainer within sandbox \"fa332a6ba3513b7d360c08fd210dd03d8d3bf9ae8e387a26856938c12a41dea4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0fb87565a09e003ec8af6c1393b6afbc9fcf41baae91a1e533c9103d95b145c7\""
Oct 28 00:13:26.493336 containerd[1597]: time="2025-10-28T00:13:26.493295589Z" level=info msg="StartContainer for \"0fb87565a09e003ec8af6c1393b6afbc9fcf41baae91a1e533c9103d95b145c7\""
Oct 28 00:13:26.494386 containerd[1597]: time="2025-10-28T00:13:26.494330486Z" level=info msg="connecting to shim 0fb87565a09e003ec8af6c1393b6afbc9fcf41baae91a1e533c9103d95b145c7" address="unix:///run/containerd/s/1e894bb152f88ec6b2431c1801e8dbca9006c91bee9d2c026c04ea40f41bb9a2" protocol=ttrpc version=3
Oct 28 00:13:26.522601 systemd[1]: Started cri-containerd-0fb87565a09e003ec8af6c1393b6afbc9fcf41baae91a1e533c9103d95b145c7.scope - libcontainer container 0fb87565a09e003ec8af6c1393b6afbc9fcf41baae91a1e533c9103d95b145c7.
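The "Pulled image" record above gives enough to back out the effective transfer rate: 35234482 bytes for the typha image in 2.532694717 s works out to roughly 13.9 MB/s from ghcr.io. A quick check of that arithmetic, using the numbers verbatim from the log:

```python
# Effective throughput of the calico/typha image pull logged above.
# Both figures are taken directly from the containerd "Pulled image"
# record; nothing here is measured independently.
size_bytes = 35_234_482     # reported repo digest size
duration_s = 2.532694717    # reported pull duration
rate_mb_s = size_bytes / duration_s / 1e6  # decimal megabytes per second
```

This is only the registry transfer as containerd accounts for it; unpack and snapshot time are logged separately.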
Oct 28 00:13:26.581252 containerd[1597]: time="2025-10-28T00:13:26.581208935Z" level=info msg="StartContainer for \"0fb87565a09e003ec8af6c1393b6afbc9fcf41baae91a1e533c9103d95b145c7\" returns successfully"
Oct 28 00:13:26.819026 kubelet[2753]: E1028 00:13:26.818910 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad"
Oct 28 00:13:26.899865 kubelet[2753]: E1028 00:13:26.899830 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:13:26.950377 kubelet[2753]: E1028 00:13:26.950343 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 00:13:26.950377 kubelet[2753]: W1028 00:13:26.950366 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 00:13:26.950588 kubelet[2753]: E1028 00:13:26.950389 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 28 00:13:26.950623 kubelet[2753]: E1028 00:13:26.950614 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.950673 kubelet[2753]: W1028 00:13:26.950635 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.950673 kubelet[2753]: E1028 00:13:26.950647 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.950853 kubelet[2753]: E1028 00:13:26.950834 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.950853 kubelet[2753]: W1028 00:13:26.950846 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.950940 kubelet[2753]: E1028 00:13:26.950856 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.951087 kubelet[2753]: E1028 00:13:26.951068 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.951087 kubelet[2753]: W1028 00:13:26.951080 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.951166 kubelet[2753]: E1028 00:13:26.951091 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.951322 kubelet[2753]: E1028 00:13:26.951288 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.951322 kubelet[2753]: W1028 00:13:26.951308 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.951322 kubelet[2753]: E1028 00:13:26.951319 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.951527 kubelet[2753]: E1028 00:13:26.951510 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.951527 kubelet[2753]: W1028 00:13:26.951519 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.951586 kubelet[2753]: E1028 00:13:26.951528 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.951693 kubelet[2753]: E1028 00:13:26.951676 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.951693 kubelet[2753]: W1028 00:13:26.951684 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.951693 kubelet[2753]: E1028 00:13:26.951692 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.951856 kubelet[2753]: E1028 00:13:26.951840 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.951856 kubelet[2753]: W1028 00:13:26.951848 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.951856 kubelet[2753]: E1028 00:13:26.951855 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.952022 kubelet[2753]: E1028 00:13:26.952006 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.952022 kubelet[2753]: W1028 00:13:26.952014 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.952022 kubelet[2753]: E1028 00:13:26.952022 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.952168 kubelet[2753]: E1028 00:13:26.952152 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.952168 kubelet[2753]: W1028 00:13:26.952160 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.952168 kubelet[2753]: E1028 00:13:26.952168 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.952320 kubelet[2753]: E1028 00:13:26.952303 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.952320 kubelet[2753]: W1028 00:13:26.952312 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.952320 kubelet[2753]: E1028 00:13:26.952320 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.952490 kubelet[2753]: E1028 00:13:26.952474 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.952490 kubelet[2753]: W1028 00:13:26.952482 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.952490 kubelet[2753]: E1028 00:13:26.952489 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.952790 kubelet[2753]: E1028 00:13:26.952773 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.952790 kubelet[2753]: W1028 00:13:26.952782 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.952790 kubelet[2753]: E1028 00:13:26.952790 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.952944 kubelet[2753]: E1028 00:13:26.952927 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.952944 kubelet[2753]: W1028 00:13:26.952935 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.952944 kubelet[2753]: E1028 00:13:26.952944 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.953101 kubelet[2753]: E1028 00:13:26.953085 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.953101 kubelet[2753]: W1028 00:13:26.953093 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.953101 kubelet[2753]: E1028 00:13:26.953100 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.979474 kubelet[2753]: E1028 00:13:26.979457 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.979474 kubelet[2753]: W1028 00:13:26.979469 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.979581 kubelet[2753]: E1028 00:13:26.979479 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.979661 kubelet[2753]: E1028 00:13:26.979647 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.979661 kubelet[2753]: W1028 00:13:26.979657 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.979742 kubelet[2753]: E1028 00:13:26.979665 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.979866 kubelet[2753]: E1028 00:13:26.979849 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.979866 kubelet[2753]: W1028 00:13:26.979864 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.979926 kubelet[2753]: E1028 00:13:26.979875 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.980124 kubelet[2753]: E1028 00:13:26.980107 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.980124 kubelet[2753]: W1028 00:13:26.980123 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.980207 kubelet[2753]: E1028 00:13:26.980133 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.980329 kubelet[2753]: E1028 00:13:26.980312 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.980329 kubelet[2753]: W1028 00:13:26.980322 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.980329 kubelet[2753]: E1028 00:13:26.980329 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.980555 kubelet[2753]: E1028 00:13:26.980538 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.980555 kubelet[2753]: W1028 00:13:26.980551 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.980660 kubelet[2753]: E1028 00:13:26.980563 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.980817 kubelet[2753]: E1028 00:13:26.980789 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.980817 kubelet[2753]: W1028 00:13:26.980800 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.980817 kubelet[2753]: E1028 00:13:26.980813 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.981132 kubelet[2753]: E1028 00:13:26.981114 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.981132 kubelet[2753]: W1028 00:13:26.981127 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.981214 kubelet[2753]: E1028 00:13:26.981139 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.981375 kubelet[2753]: E1028 00:13:26.981334 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.981375 kubelet[2753]: W1028 00:13:26.981352 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.981375 kubelet[2753]: E1028 00:13:26.981364 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.981580 kubelet[2753]: E1028 00:13:26.981563 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.981580 kubelet[2753]: W1028 00:13:26.981574 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.981687 kubelet[2753]: E1028 00:13:26.981584 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.981851 kubelet[2753]: E1028 00:13:26.981837 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.981851 kubelet[2753]: W1028 00:13:26.981848 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.981912 kubelet[2753]: E1028 00:13:26.981857 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.982948 kubelet[2753]: E1028 00:13:26.982930 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.982948 kubelet[2753]: W1028 00:13:26.982944 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.983041 kubelet[2753]: E1028 00:13:26.982957 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.983184 kubelet[2753]: E1028 00:13:26.983168 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.983184 kubelet[2753]: W1028 00:13:26.983179 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.983252 kubelet[2753]: E1028 00:13:26.983189 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.983422 kubelet[2753]: E1028 00:13:26.983377 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.983422 kubelet[2753]: W1028 00:13:26.983399 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.983510 kubelet[2753]: E1028 00:13:26.983436 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.983681 kubelet[2753]: E1028 00:13:26.983663 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.983681 kubelet[2753]: W1028 00:13:26.983677 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.983767 kubelet[2753]: E1028 00:13:26.983689 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.983945 kubelet[2753]: E1028 00:13:26.983929 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.983945 kubelet[2753]: W1028 00:13:26.983942 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.984024 kubelet[2753]: E1028 00:13:26.983953 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:26.984167 kubelet[2753]: E1028 00:13:26.984153 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.984197 kubelet[2753]: W1028 00:13:26.984166 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.984197 kubelet[2753]: E1028 00:13:26.984178 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 00:13:26.984612 kubelet[2753]: E1028 00:13:26.984597 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 00:13:26.984612 kubelet[2753]: W1028 00:13:26.984610 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 00:13:26.984683 kubelet[2753]: E1028 00:13:26.984622 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 00:13:27.753475 containerd[1597]: time="2025-10-28T00:13:27.753397319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:27.754173 containerd[1597]: time="2025-10-28T00:13:27.754139885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 28 00:13:27.755311 containerd[1597]: time="2025-10-28T00:13:27.755285811Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:27.757449 containerd[1597]: time="2025-10-28T00:13:27.757375741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:27.757898 containerd[1597]: time="2025-10-28T00:13:27.757850434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.294967301s" Oct 28 00:13:27.757898 containerd[1597]: time="2025-10-28T00:13:27.757892723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 28 00:13:27.761551 containerd[1597]: time="2025-10-28T00:13:27.761524744Z" level=info msg="CreateContainer within sandbox \"06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 28 00:13:27.770316 containerd[1597]: time="2025-10-28T00:13:27.770265922Z" level=info msg="Container fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7: CDI devices from CRI Config.CDIDevices: []" Oct 28 00:13:27.780881 containerd[1597]: time="2025-10-28T00:13:27.780845168Z" level=info msg="CreateContainer within sandbox \"06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7\"" Oct 28 00:13:27.781684 containerd[1597]: time="2025-10-28T00:13:27.781639883Z" level=info msg="StartContainer for \"fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7\"" Oct 28 00:13:27.783246 containerd[1597]: time="2025-10-28T00:13:27.783217720Z" level=info msg="connecting to shim fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7" address="unix:///run/containerd/s/fbde84ffc553359da1ac3a8f40f9d1550d3d87be582f5c54afd192a0b9165c2a" protocol=ttrpc version=3 Oct 28 00:13:27.815699 systemd[1]: Started cri-containerd-fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7.scope - libcontainer container fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7. Oct 28 00:13:27.877397 systemd[1]: cri-containerd-fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7.scope: Deactivated successfully. 
Oct 28 00:13:27.879399 containerd[1597]: time="2025-10-28T00:13:27.879336794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7\" id:\"fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7\" pid:3421 exited_at:{seconds:1761610407 nanos:878644452}" Oct 28 00:13:28.558240 containerd[1597]: time="2025-10-28T00:13:28.558171151Z" level=info msg="received exit event container_id:\"fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7\" id:\"fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7\" pid:3421 exited_at:{seconds:1761610407 nanos:878644452}" Oct 28 00:13:28.560989 containerd[1597]: time="2025-10-28T00:13:28.560907146Z" level=info msg="StartContainer for \"fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7\" returns successfully" Oct 28 00:13:28.561867 kubelet[2753]: I1028 00:13:28.561834 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 00:13:28.562644 kubelet[2753]: E1028 00:13:28.562184 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:28.584439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe373c6abf2424c25fdfaaa3702e183007709a66da81f88db5f8117b9cf578e7-rootfs.mount: Deactivated successfully. 
Oct 28 00:13:28.938714 kubelet[2753]: E1028 00:13:28.819495 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:13:29.566520 kubelet[2753]: E1028 00:13:29.566480 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:29.567461 containerd[1597]: time="2025-10-28T00:13:29.567404478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 28 00:13:29.582963 kubelet[2753]: I1028 00:13:29.582877 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-695fdc8956-p69cx" podStartSLOduration=4.048557923 podStartE2EDuration="6.582860594s" podCreationTimestamp="2025-10-28 00:13:23 +0000 UTC" firstStartedPulling="2025-10-28 00:13:23.928439646 +0000 UTC m=+24.206071251" lastFinishedPulling="2025-10-28 00:13:26.462742318 +0000 UTC m=+26.740373922" observedRunningTime="2025-10-28 00:13:26.909105949 +0000 UTC m=+27.186737553" watchObservedRunningTime="2025-10-28 00:13:29.582860594 +0000 UTC m=+29.860492198" Oct 28 00:13:30.819266 kubelet[2753]: E1028 00:13:30.819117 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:13:32.819171 kubelet[2753]: E1028 00:13:32.819094 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:13:33.083644 kubelet[2753]: I1028 00:13:33.083489 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 00:13:33.083992 kubelet[2753]: E1028 00:13:33.083973 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:33.347119 containerd[1597]: time="2025-10-28T00:13:33.347057017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:33.348302 containerd[1597]: time="2025-10-28T00:13:33.348267751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 28 00:13:33.350479 containerd[1597]: time="2025-10-28T00:13:33.350436495Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:33.353348 containerd[1597]: time="2025-10-28T00:13:33.353300616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:33.354128 containerd[1597]: time="2025-10-28T00:13:33.354078197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.786603918s" Oct 28 00:13:33.354128 containerd[1597]: time="2025-10-28T00:13:33.354117050Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 28 00:13:33.359532 containerd[1597]: time="2025-10-28T00:13:33.359459077Z" level=info msg="CreateContainer within sandbox \"06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 28 00:13:33.376254 containerd[1597]: time="2025-10-28T00:13:33.373933622Z" level=info msg="Container fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3: CDI devices from CRI Config.CDIDevices: []" Oct 28 00:13:33.385173 containerd[1597]: time="2025-10-28T00:13:33.385127874Z" level=info msg="CreateContainer within sandbox \"06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3\"" Oct 28 00:13:33.385818 containerd[1597]: time="2025-10-28T00:13:33.385792823Z" level=info msg="StartContainer for \"fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3\"" Oct 28 00:13:33.387527 containerd[1597]: time="2025-10-28T00:13:33.387490243Z" level=info msg="connecting to shim fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3" address="unix:///run/containerd/s/fbde84ffc553359da1ac3a8f40f9d1550d3d87be582f5c54afd192a0b9165c2a" protocol=ttrpc version=3 Oct 28 00:13:33.413579 systemd[1]: Started cri-containerd-fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3.scope - libcontainer container fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3. 
Oct 28 00:13:33.466833 containerd[1597]: time="2025-10-28T00:13:33.466778901Z" level=info msg="StartContainer for \"fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3\" returns successfully" Oct 28 00:13:33.576536 kubelet[2753]: E1028 00:13:33.576458 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:33.576536 kubelet[2753]: E1028 00:13:33.576482 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:34.577997 kubelet[2753]: E1028 00:13:34.577942 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:34.819615 kubelet[2753]: E1028 00:13:34.819462 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:13:34.842043 containerd[1597]: time="2025-10-28T00:13:34.841995925Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 00:13:34.845184 systemd[1]: cri-containerd-fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3.scope: Deactivated successfully. Oct 28 00:13:34.845699 systemd[1]: cri-containerd-fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3.scope: Consumed 709ms CPU time, 179.3M memory peak, 3.6M read from disk, 171.3M written to disk. 
Oct 28 00:13:34.847996 containerd[1597]: time="2025-10-28T00:13:34.847943718Z" level=info msg="received exit event container_id:\"fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3\" id:\"fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3\" pid:3483 exited_at:{seconds:1761610414 nanos:847721260}" Oct 28 00:13:34.848086 containerd[1597]: time="2025-10-28T00:13:34.847997680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3\" id:\"fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3\" pid:3483 exited_at:{seconds:1761610414 nanos:847721260}" Oct 28 00:13:34.876014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd016a12b6133af741e99a0270d790211ab70b467602b83818120632fde2e8b3-rootfs.mount: Deactivated successfully. Oct 28 00:13:34.881321 kubelet[2753]: I1028 00:13:34.881010 2753 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 28 00:13:35.103847 systemd[1]: Created slice kubepods-burstable-podffaa584b_c0eb_4855_80a7_bb13ffeca77a.slice - libcontainer container kubepods-burstable-podffaa584b_c0eb_4855_80a7_bb13ffeca77a.slice. Oct 28 00:13:35.114388 systemd[1]: Created slice kubepods-besteffort-pod9eebb6dc_0fd7_4bdc_8419_21aee039a4fd.slice - libcontainer container kubepods-besteffort-pod9eebb6dc_0fd7_4bdc_8419_21aee039a4fd.slice. Oct 28 00:13:35.122181 systemd[1]: Created slice kubepods-besteffort-pod857b2565_c255_4b2a_a804_4d8f469fd36f.slice - libcontainer container kubepods-besteffort-pod857b2565_c255_4b2a_a804_4d8f469fd36f.slice. Oct 28 00:13:35.131071 systemd[1]: Created slice kubepods-burstable-pod0e5367a5_74db_4442_a121_3a4c264915e4.slice - libcontainer container kubepods-burstable-pod0e5367a5_74db_4442_a121_3a4c264915e4.slice. 
Oct 28 00:13:35.138443 kubelet[2753]: I1028 00:13:35.137661 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrxh\" (UniqueName: \"kubernetes.io/projected/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-kube-api-access-dfrxh\") pod \"whisker-58d55c7c46-5npkm\" (UID: \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\") " pod="calico-system/whisker-58d55c7c46-5npkm" Oct 28 00:13:35.138443 kubelet[2753]: I1028 00:13:35.137703 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/857b2565-c255-4b2a-a804-4d8f469fd36f-config\") pod \"goldmane-7c778bb748-pd76c\" (UID: \"857b2565-c255-4b2a-a804-4d8f469fd36f\") " pod="calico-system/goldmane-7c778bb748-pd76c" Oct 28 00:13:35.138443 kubelet[2753]: I1028 00:13:35.137720 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv4g5\" (UniqueName: \"kubernetes.io/projected/02d0f3a2-6615-4333-9168-153cfad8a1a2-kube-api-access-tv4g5\") pod \"calico-apiserver-9c466c9c-dq2lt\" (UID: \"02d0f3a2-6615-4333-9168-153cfad8a1a2\") " pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" Oct 28 00:13:35.138443 kubelet[2753]: I1028 00:13:35.137791 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e5367a5-74db-4442-a121-3a4c264915e4-config-volume\") pod \"coredns-66bc5c9577-flcqw\" (UID: \"0e5367a5-74db-4442-a121-3a4c264915e4\") " pod="kube-system/coredns-66bc5c9577-flcqw" Oct 28 00:13:35.138443 kubelet[2753]: I1028 00:13:35.137859 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3e103a9d-d0d4-4b11-9367-559f1c47a552-calico-apiserver-certs\") pod \"calico-apiserver-76b9897cff-8q7s2\" (UID: 
\"3e103a9d-d0d4-4b11-9367-559f1c47a552\") " pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" Oct 28 00:13:35.138718 kubelet[2753]: I1028 00:13:35.137882 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb03ae87-19b8-4ccf-ad2d-924a8b3b4421-calico-apiserver-certs\") pod \"calico-apiserver-9c466c9c-29kbz\" (UID: \"eb03ae87-19b8-4ccf-ad2d-924a8b3b4421\") " pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" Oct 28 00:13:35.138718 kubelet[2753]: I1028 00:13:35.137908 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vzx\" (UniqueName: \"kubernetes.io/projected/857b2565-c255-4b2a-a804-4d8f469fd36f-kube-api-access-57vzx\") pod \"goldmane-7c778bb748-pd76c\" (UID: \"857b2565-c255-4b2a-a804-4d8f469fd36f\") " pod="calico-system/goldmane-7c778bb748-pd76c" Oct 28 00:13:35.138718 kubelet[2753]: I1028 00:13:35.137921 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e002b83f-c358-4e16-aba8-6f13c28c0b61-tigera-ca-bundle\") pod \"calico-kube-controllers-6dccbd5fb7-7mmn5\" (UID: \"e002b83f-c358-4e16-aba8-6f13c28c0b61\") " pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" Oct 28 00:13:35.138718 kubelet[2753]: I1028 00:13:35.137935 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5x2n\" (UniqueName: \"kubernetes.io/projected/eb03ae87-19b8-4ccf-ad2d-924a8b3b4421-kube-api-access-w5x2n\") pod \"calico-apiserver-9c466c9c-29kbz\" (UID: \"eb03ae87-19b8-4ccf-ad2d-924a8b3b4421\") " pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" Oct 28 00:13:35.138718 kubelet[2753]: I1028 00:13:35.137954 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-whisker-backend-key-pair\") pod \"whisker-58d55c7c46-5npkm\" (UID: \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\") " pod="calico-system/whisker-58d55c7c46-5npkm" Oct 28 00:13:35.138883 kubelet[2753]: I1028 00:13:35.137972 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/857b2565-c255-4b2a-a804-4d8f469fd36f-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-pd76c\" (UID: \"857b2565-c255-4b2a-a804-4d8f469fd36f\") " pod="calico-system/goldmane-7c778bb748-pd76c" Oct 28 00:13:35.138883 kubelet[2753]: I1028 00:13:35.137988 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/857b2565-c255-4b2a-a804-4d8f469fd36f-goldmane-key-pair\") pod \"goldmane-7c778bb748-pd76c\" (UID: \"857b2565-c255-4b2a-a804-4d8f469fd36f\") " pod="calico-system/goldmane-7c778bb748-pd76c" Oct 28 00:13:35.138883 kubelet[2753]: I1028 00:13:35.138006 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/02d0f3a2-6615-4333-9168-153cfad8a1a2-calico-apiserver-certs\") pod \"calico-apiserver-9c466c9c-dq2lt\" (UID: \"02d0f3a2-6615-4333-9168-153cfad8a1a2\") " pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" Oct 28 00:13:35.138883 kubelet[2753]: I1028 00:13:35.138042 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaa584b-c0eb-4855-80a7-bb13ffeca77a-config-volume\") pod \"coredns-66bc5c9577-v8g72\" (UID: \"ffaa584b-c0eb-4855-80a7-bb13ffeca77a\") " pod="kube-system/coredns-66bc5c9577-v8g72" Oct 28 00:13:35.138883 kubelet[2753]: I1028 00:13:35.138062 2753 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-whisker-ca-bundle\") pod \"whisker-58d55c7c46-5npkm\" (UID: \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\") " pod="calico-system/whisker-58d55c7c46-5npkm" Oct 28 00:13:35.139038 kubelet[2753]: I1028 00:13:35.138079 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8tjr\" (UniqueName: \"kubernetes.io/projected/ffaa584b-c0eb-4855-80a7-bb13ffeca77a-kube-api-access-c8tjr\") pod \"coredns-66bc5c9577-v8g72\" (UID: \"ffaa584b-c0eb-4855-80a7-bb13ffeca77a\") " pod="kube-system/coredns-66bc5c9577-v8g72" Oct 28 00:13:35.139038 kubelet[2753]: I1028 00:13:35.138094 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjc58\" (UniqueName: \"kubernetes.io/projected/e002b83f-c358-4e16-aba8-6f13c28c0b61-kube-api-access-jjc58\") pod \"calico-kube-controllers-6dccbd5fb7-7mmn5\" (UID: \"e002b83f-c358-4e16-aba8-6f13c28c0b61\") " pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" Oct 28 00:13:35.139038 kubelet[2753]: I1028 00:13:35.138111 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l242l\" (UniqueName: \"kubernetes.io/projected/3e103a9d-d0d4-4b11-9367-559f1c47a552-kube-api-access-l242l\") pod \"calico-apiserver-76b9897cff-8q7s2\" (UID: \"3e103a9d-d0d4-4b11-9367-559f1c47a552\") " pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" Oct 28 00:13:35.139038 kubelet[2753]: I1028 00:13:35.138125 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49dqx\" (UniqueName: \"kubernetes.io/projected/0e5367a5-74db-4442-a121-3a4c264915e4-kube-api-access-49dqx\") pod \"coredns-66bc5c9577-flcqw\" (UID: \"0e5367a5-74db-4442-a121-3a4c264915e4\") " 
pod="kube-system/coredns-66bc5c9577-flcqw" Oct 28 00:13:35.139983 systemd[1]: Created slice kubepods-besteffort-pod3e103a9d_d0d4_4b11_9367_559f1c47a552.slice - libcontainer container kubepods-besteffort-pod3e103a9d_d0d4_4b11_9367_559f1c47a552.slice. Oct 28 00:13:35.144804 systemd[1]: Created slice kubepods-besteffort-podeb03ae87_19b8_4ccf_ad2d_924a8b3b4421.slice - libcontainer container kubepods-besteffort-podeb03ae87_19b8_4ccf_ad2d_924a8b3b4421.slice. Oct 28 00:13:35.151494 systemd[1]: Created slice kubepods-besteffort-pode002b83f_c358_4e16_aba8_6f13c28c0b61.slice - libcontainer container kubepods-besteffort-pode002b83f_c358_4e16_aba8_6f13c28c0b61.slice. Oct 28 00:13:35.159018 systemd[1]: Created slice kubepods-besteffort-pod02d0f3a2_6615_4333_9168_153cfad8a1a2.slice - libcontainer container kubepods-besteffort-pod02d0f3a2_6615_4333_9168_153cfad8a1a2.slice. Oct 28 00:13:35.413058 kubelet[2753]: E1028 00:13:35.412943 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:35.413785 containerd[1597]: time="2025-10-28T00:13:35.413725874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v8g72,Uid:ffaa584b-c0eb-4855-80a7-bb13ffeca77a,Namespace:kube-system,Attempt:0,}" Oct 28 00:13:35.420926 containerd[1597]: time="2025-10-28T00:13:35.420869573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d55c7c46-5npkm,Uid:9eebb6dc-0fd7-4bdc-8419-21aee039a4fd,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:35.430751 containerd[1597]: time="2025-10-28T00:13:35.430695429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pd76c,Uid:857b2565-c255-4b2a-a804-4d8f469fd36f,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:35.439709 kubelet[2753]: E1028 00:13:35.439667 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:35.441505 containerd[1597]: time="2025-10-28T00:13:35.440799728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-flcqw,Uid:0e5367a5-74db-4442-a121-3a4c264915e4,Namespace:kube-system,Attempt:0,}" Oct 28 00:13:35.444735 containerd[1597]: time="2025-10-28T00:13:35.444568678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76b9897cff-8q7s2,Uid:3e103a9d-d0d4-4b11-9367-559f1c47a552,Namespace:calico-apiserver,Attempt:0,}" Oct 28 00:13:35.453328 containerd[1597]: time="2025-10-28T00:13:35.453283648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c466c9c-29kbz,Uid:eb03ae87-19b8-4ccf-ad2d-924a8b3b4421,Namespace:calico-apiserver,Attempt:0,}" Oct 28 00:13:35.457368 containerd[1597]: time="2025-10-28T00:13:35.457016229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dccbd5fb7-7mmn5,Uid:e002b83f-c358-4e16-aba8-6f13c28c0b61,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:35.465375 containerd[1597]: time="2025-10-28T00:13:35.465318354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c466c9c-dq2lt,Uid:02d0f3a2-6615-4333-9168-153cfad8a1a2,Namespace:calico-apiserver,Attempt:0,}" Oct 28 00:13:35.555249 containerd[1597]: time="2025-10-28T00:13:35.555193211Z" level=error msg="Failed to destroy network for sandbox \"a6f6d1590b9a27cf64607eaeb61d342a8d1c30991b89d1f91dbb3f6d684d03f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.568525 containerd[1597]: time="2025-10-28T00:13:35.568452827Z" level=error msg="Failed to destroy network for sandbox \"dc499ce7d28ffaab553fb8e43b79f3b0435b7029654f87a771ca188963a89e69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.575644 containerd[1597]: time="2025-10-28T00:13:35.575603649Z" level=error msg="Failed to destroy network for sandbox \"30952666a028421ed5a652342029382c61dc1b732ddd679b03cfc20258486b3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.616905 containerd[1597]: time="2025-10-28T00:13:35.616740047Z" level=error msg="Failed to destroy network for sandbox \"37310d048db68badf031de8d855bf469390278484c9814033ea899d8cd836518\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.616905 containerd[1597]: time="2025-10-28T00:13:35.616786184Z" level=error msg="Failed to destroy network for sandbox \"05ab6b980ee566d847d74ad81c1ba0f2b63b0391f00c6f73a247195fb7d3ded1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.617225 containerd[1597]: time="2025-10-28T00:13:35.616805230Z" level=error msg="Failed to destroy network for sandbox \"58c8c8d1a55beb590868cb15736d1c2c1dc76a084b706a4cd25ead36c43b315c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.633438 kubelet[2753]: E1028 00:13:35.632658 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:35.640254 containerd[1597]: time="2025-10-28T00:13:35.631384845Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d55c7c46-5npkm,Uid:9eebb6dc-0fd7-4bdc-8419-21aee039a4fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f6d1590b9a27cf64607eaeb61d342a8d1c30991b89d1f91dbb3f6d684d03f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.640450 containerd[1597]: time="2025-10-28T00:13:35.631430441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-flcqw,Uid:0e5367a5-74db-4442-a121-3a4c264915e4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37310d048db68badf031de8d855bf469390278484c9814033ea899d8cd836518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.640450 containerd[1597]: time="2025-10-28T00:13:35.631426233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v8g72,Uid:ffaa584b-c0eb-4855-80a7-bb13ffeca77a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc499ce7d28ffaab553fb8e43b79f3b0435b7029654f87a771ca188963a89e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.640450 containerd[1597]: time="2025-10-28T00:13:35.631462331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pd76c,Uid:857b2565-c255-4b2a-a804-4d8f469fd36f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"30952666a028421ed5a652342029382c61dc1b732ddd679b03cfc20258486b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.640642 containerd[1597]: time="2025-10-28T00:13:35.631498570Z" level=error msg="Failed to destroy network for sandbox \"237e8e6562cd795e644863b80d67b90659d3389ef0c571a6389f708d6f023a73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.640642 containerd[1597]: time="2025-10-28T00:13:35.616740228Z" level=error msg="Failed to destroy network for sandbox \"b055df9d66f06c995cb730c608304ef3ee822406824e838dd236944178d35b12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.640733 containerd[1597]: time="2025-10-28T00:13:35.633576563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76b9897cff-8q7s2,Uid:3e103a9d-d0d4-4b11-9367-559f1c47a552,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05ab6b980ee566d847d74ad81c1ba0f2b63b0391f00c6f73a247195fb7d3ded1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.640733 containerd[1597]: time="2025-10-28T00:13:35.633620545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 28 00:13:35.641105 containerd[1597]: time="2025-10-28T00:13:35.635388156Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6dccbd5fb7-7mmn5,Uid:e002b83f-c358-4e16-aba8-6f13c28c0b61,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c8c8d1a55beb590868cb15736d1c2c1dc76a084b706a4cd25ead36c43b315c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.643466 containerd[1597]: time="2025-10-28T00:13:35.643434018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c466c9c-dq2lt,Uid:02d0f3a2-6615-4333-9168-153cfad8a1a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"237e8e6562cd795e644863b80d67b90659d3389ef0c571a6389f708d6f023a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.644618 containerd[1597]: time="2025-10-28T00:13:35.644576605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c466c9c-29kbz,Uid:eb03ae87-19b8-4ccf-ad2d-924a8b3b4421,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b055df9d66f06c995cb730c608304ef3ee822406824e838dd236944178d35b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.647598 kubelet[2753]: E1028 00:13:35.647544 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f6d1590b9a27cf64607eaeb61d342a8d1c30991b89d1f91dbb3f6d684d03f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.647686 kubelet[2753]: E1028 00:13:35.647581 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc499ce7d28ffaab553fb8e43b79f3b0435b7029654f87a771ca188963a89e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.647686 kubelet[2753]: E1028 00:13:35.647581 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b055df9d66f06c995cb730c608304ef3ee822406824e838dd236944178d35b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.647686 kubelet[2753]: E1028 00:13:35.647627 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f6d1590b9a27cf64607eaeb61d342a8d1c30991b89d1f91dbb3f6d684d03f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58d55c7c46-5npkm" Oct 28 00:13:35.647686 kubelet[2753]: E1028 00:13:35.647640 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc499ce7d28ffaab553fb8e43b79f3b0435b7029654f87a771ca188963a89e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v8g72" Oct 28 00:13:35.647841 
kubelet[2753]: E1028 00:13:35.647642 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b055df9d66f06c995cb730c608304ef3ee822406824e838dd236944178d35b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" Oct 28 00:13:35.647841 kubelet[2753]: E1028 00:13:35.647655 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f6d1590b9a27cf64607eaeb61d342a8d1c30991b89d1f91dbb3f6d684d03f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58d55c7c46-5npkm" Oct 28 00:13:35.647841 kubelet[2753]: E1028 00:13:35.647663 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b055df9d66f06c995cb730c608304ef3ee822406824e838dd236944178d35b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" Oct 28 00:13:35.647841 kubelet[2753]: E1028 00:13:35.647683 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37310d048db68badf031de8d855bf469390278484c9814033ea899d8cd836518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.647987 kubelet[2753]: E1028 00:13:35.647697 2753 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37310d048db68badf031de8d855bf469390278484c9814033ea899d8cd836518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-flcqw" Oct 28 00:13:35.647987 kubelet[2753]: E1028 00:13:35.647709 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37310d048db68badf031de8d855bf469390278484c9814033ea899d8cd836518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-flcqw" Oct 28 00:13:35.647987 kubelet[2753]: E1028 00:13:35.647728 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9c466c9c-29kbz_calico-apiserver(eb03ae87-19b8-4ccf-ad2d-924a8b3b4421)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9c466c9c-29kbz_calico-apiserver(eb03ae87-19b8-4ccf-ad2d-924a8b3b4421)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b055df9d66f06c995cb730c608304ef3ee822406824e838dd236944178d35b12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" podUID="eb03ae87-19b8-4ccf-ad2d-924a8b3b4421" Oct 28 00:13:35.648116 kubelet[2753]: E1028 00:13:35.647741 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-flcqw_kube-system(0e5367a5-74db-4442-a121-3a4c264915e4)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-flcqw_kube-system(0e5367a5-74db-4442-a121-3a4c264915e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37310d048db68badf031de8d855bf469390278484c9814033ea899d8cd836518\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-flcqw" podUID="0e5367a5-74db-4442-a121-3a4c264915e4" Oct 28 00:13:35.648116 kubelet[2753]: E1028 00:13:35.647728 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58d55c7c46-5npkm_calico-system(9eebb6dc-0fd7-4bdc-8419-21aee039a4fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58d55c7c46-5npkm_calico-system(9eebb6dc-0fd7-4bdc-8419-21aee039a4fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6f6d1590b9a27cf64607eaeb61d342a8d1c30991b89d1f91dbb3f6d684d03f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58d55c7c46-5npkm" podUID="9eebb6dc-0fd7-4bdc-8419-21aee039a4fd" Oct 28 00:13:35.648116 kubelet[2753]: E1028 00:13:35.647661 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc499ce7d28ffaab553fb8e43b79f3b0435b7029654f87a771ca188963a89e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v8g72" Oct 28 00:13:35.648213 kubelet[2753]: E1028 00:13:35.647777 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"30952666a028421ed5a652342029382c61dc1b732ddd679b03cfc20258486b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.648213 kubelet[2753]: E1028 00:13:35.647787 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-v8g72_kube-system(ffaa584b-c0eb-4855-80a7-bb13ffeca77a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-v8g72_kube-system(ffaa584b-c0eb-4855-80a7-bb13ffeca77a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc499ce7d28ffaab553fb8e43b79f3b0435b7029654f87a771ca188963a89e69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-v8g72" podUID="ffaa584b-c0eb-4855-80a7-bb13ffeca77a" Oct 28 00:13:35.648213 kubelet[2753]: E1028 00:13:35.647801 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30952666a028421ed5a652342029382c61dc1b732ddd679b03cfc20258486b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pd76c" Oct 28 00:13:35.648213 kubelet[2753]: E1028 00:13:35.647817 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c8c8d1a55beb590868cb15736d1c2c1dc76a084b706a4cd25ead36c43b315c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Oct 28 00:13:35.648358 kubelet[2753]: E1028 00:13:35.647821 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30952666a028421ed5a652342029382c61dc1b732ddd679b03cfc20258486b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pd76c" Oct 28 00:13:35.648358 kubelet[2753]: E1028 00:13:35.647832 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c8c8d1a55beb590868cb15736d1c2c1dc76a084b706a4cd25ead36c43b315c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" Oct 28 00:13:35.648358 kubelet[2753]: E1028 00:13:35.647843 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c8c8d1a55beb590868cb15736d1c2c1dc76a084b706a4cd25ead36c43b315c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" Oct 28 00:13:35.648463 kubelet[2753]: E1028 00:13:35.647855 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pd76c_calico-system(857b2565-c255-4b2a-a804-4d8f469fd36f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pd76c_calico-system(857b2565-c255-4b2a-a804-4d8f469fd36f)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"30952666a028421ed5a652342029382c61dc1b732ddd679b03cfc20258486b3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pd76c" podUID="857b2565-c255-4b2a-a804-4d8f469fd36f" Oct 28 00:13:35.648463 kubelet[2753]: E1028 00:13:35.647864 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dccbd5fb7-7mmn5_calico-system(e002b83f-c358-4e16-aba8-6f13c28c0b61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dccbd5fb7-7mmn5_calico-system(e002b83f-c358-4e16-aba8-6f13c28c0b61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58c8c8d1a55beb590868cb15736d1c2c1dc76a084b706a4cd25ead36c43b315c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" podUID="e002b83f-c358-4e16-aba8-6f13c28c0b61" Oct 28 00:13:35.648463 kubelet[2753]: E1028 00:13:35.647886 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05ab6b980ee566d847d74ad81c1ba0f2b63b0391f00c6f73a247195fb7d3ded1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.648639 kubelet[2753]: E1028 00:13:35.647893 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237e8e6562cd795e644863b80d67b90659d3389ef0c571a6389f708d6f023a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:35.648639 kubelet[2753]: E1028 00:13:35.647899 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05ab6b980ee566d847d74ad81c1ba0f2b63b0391f00c6f73a247195fb7d3ded1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" Oct 28 00:13:35.648639 kubelet[2753]: E1028 00:13:35.647911 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05ab6b980ee566d847d74ad81c1ba0f2b63b0391f00c6f73a247195fb7d3ded1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" Oct 28 00:13:35.648639 kubelet[2753]: E1028 00:13:35.647915 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237e8e6562cd795e644863b80d67b90659d3389ef0c571a6389f708d6f023a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" Oct 28 00:13:35.648763 kubelet[2753]: E1028 00:13:35.647934 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237e8e6562cd795e644863b80d67b90659d3389ef0c571a6389f708d6f023a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" Oct 28 00:13:35.648763 kubelet[2753]: E1028 00:13:35.648012 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9c466c9c-dq2lt_calico-apiserver(02d0f3a2-6615-4333-9168-153cfad8a1a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9c466c9c-dq2lt_calico-apiserver(02d0f3a2-6615-4333-9168-153cfad8a1a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"237e8e6562cd795e644863b80d67b90659d3389ef0c571a6389f708d6f023a73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" podUID="02d0f3a2-6615-4333-9168-153cfad8a1a2" Oct 28 00:13:35.648851 kubelet[2753]: E1028 00:13:35.647937 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76b9897cff-8q7s2_calico-apiserver(3e103a9d-d0d4-4b11-9367-559f1c47a552)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76b9897cff-8q7s2_calico-apiserver(3e103a9d-d0d4-4b11-9367-559f1c47a552)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05ab6b980ee566d847d74ad81c1ba0f2b63b0391f00c6f73a247195fb7d3ded1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" podUID="3e103a9d-d0d4-4b11-9367-559f1c47a552" Oct 28 00:13:36.824708 systemd[1]: Created slice kubepods-besteffort-pod9cf7db7c_cf1f_40f0_bd37_4896435636ad.slice - libcontainer container 
kubepods-besteffort-pod9cf7db7c_cf1f_40f0_bd37_4896435636ad.slice. Oct 28 00:13:36.829690 containerd[1597]: time="2025-10-28T00:13:36.829628750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgknx,Uid:9cf7db7c-cf1f-40f0-bd37-4896435636ad,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:36.884322 containerd[1597]: time="2025-10-28T00:13:36.884245810Z" level=error msg="Failed to destroy network for sandbox \"6541e906b569db25a1ebd1ddff6afdc3d8d69c4b958cdc88959fae7b84934d31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:36.885933 containerd[1597]: time="2025-10-28T00:13:36.885872114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgknx,Uid:9cf7db7c-cf1f-40f0-bd37-4896435636ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6541e906b569db25a1ebd1ddff6afdc3d8d69c4b958cdc88959fae7b84934d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:36.886436 kubelet[2753]: E1028 00:13:36.886155 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6541e906b569db25a1ebd1ddff6afdc3d8d69c4b958cdc88959fae7b84934d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 00:13:36.886436 kubelet[2753]: E1028 00:13:36.886228 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6541e906b569db25a1ebd1ddff6afdc3d8d69c4b958cdc88959fae7b84934d31\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgknx" Oct 28 00:13:36.886436 kubelet[2753]: E1028 00:13:36.886258 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6541e906b569db25a1ebd1ddff6afdc3d8d69c4b958cdc88959fae7b84934d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgknx" Oct 28 00:13:36.886841 kubelet[2753]: E1028 00:13:36.886325 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgknx_calico-system(9cf7db7c-cf1f-40f0-bd37-4896435636ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgknx_calico-system(9cf7db7c-cf1f-40f0-bd37-4896435636ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6541e906b569db25a1ebd1ddff6afdc3d8d69c4b958cdc88959fae7b84934d31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:13:36.887018 systemd[1]: run-netns-cni\x2d8ed06a12\x2d1307\x2d4020\x2d3706\x2dc65ab3ad81df.mount: Deactivated successfully. Oct 28 00:13:42.450286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958390100.mount: Deactivated successfully. Oct 28 00:13:42.825259 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:47636.service - OpenSSH per-connection server daemon (10.0.0.1:47636). 
Oct 28 00:13:42.925389 sshd[3825]: Accepted publickey for core from 10.0.0.1 port 47636 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:13:42.927175 sshd-session[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:13:42.932147 systemd-logind[1578]: New session 8 of user core. Oct 28 00:13:42.941557 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 28 00:13:43.153098 sshd[3828]: Connection closed by 10.0.0.1 port 47636 Oct 28 00:13:43.153446 sshd-session[3825]: pam_unix(sshd:session): session closed for user core Oct 28 00:13:43.158108 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:47636.service: Deactivated successfully. Oct 28 00:13:43.160112 systemd[1]: session-8.scope: Deactivated successfully. Oct 28 00:13:43.160939 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit. Oct 28 00:13:43.162180 systemd-logind[1578]: Removed session 8. Oct 28 00:13:43.707869 containerd[1597]: time="2025-10-28T00:13:43.707806115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:43.710002 containerd[1597]: time="2025-10-28T00:13:43.709919922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 28 00:13:43.710912 containerd[1597]: time="2025-10-28T00:13:43.710866008Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:43.714506 containerd[1597]: time="2025-10-28T00:13:43.713931563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 00:13:43.714506 containerd[1597]: time="2025-10-28T00:13:43.714348264Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.073581516s" Oct 28 00:13:43.714506 containerd[1597]: time="2025-10-28T00:13:43.714376347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 28 00:13:43.737712 containerd[1597]: time="2025-10-28T00:13:43.737642999Z" level=info msg="CreateContainer within sandbox \"06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 28 00:13:43.747299 containerd[1597]: time="2025-10-28T00:13:43.747243148Z" level=info msg="Container 39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249: CDI devices from CRI Config.CDIDevices: []" Oct 28 00:13:43.766952 containerd[1597]: time="2025-10-28T00:13:43.766889615Z" level=info msg="CreateContainer within sandbox \"06fa37942ad8fa2014c31d95a3f046ec60149c06c36a4d3139c940475c1a098f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249\"" Oct 28 00:13:43.767630 containerd[1597]: time="2025-10-28T00:13:43.767580061Z" level=info msg="StartContainer for \"39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249\"" Oct 28 00:13:43.769078 containerd[1597]: time="2025-10-28T00:13:43.769028199Z" level=info msg="connecting to shim 39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249" address="unix:///run/containerd/s/fbde84ffc553359da1ac3a8f40f9d1550d3d87be582f5c54afd192a0b9165c2a" protocol=ttrpc version=3 Oct 28 00:13:43.794535 systemd[1]: Started 
cri-containerd-39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249.scope - libcontainer container 39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249. Oct 28 00:13:43.846751 containerd[1597]: time="2025-10-28T00:13:43.846693998Z" level=info msg="StartContainer for \"39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249\" returns successfully" Oct 28 00:13:43.922980 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 28 00:13:43.923112 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 28 00:13:44.101947 kubelet[2753]: I1028 00:13:44.101906 2753 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-whisker-backend-key-pair\") pod \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\" (UID: \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\") " Oct 28 00:13:44.101947 kubelet[2753]: I1028 00:13:44.101961 2753 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfrxh\" (UniqueName: \"kubernetes.io/projected/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-kube-api-access-dfrxh\") pod \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\" (UID: \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\") " Oct 28 00:13:44.101947 kubelet[2753]: I1028 00:13:44.101981 2753 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-whisker-ca-bundle\") pod \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\" (UID: \"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd\") " Oct 28 00:13:44.102965 kubelet[2753]: I1028 00:13:44.102508 2753 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9eebb6dc-0fd7-4bdc-8419-21aee039a4fd" (UID: 
"9eebb6dc-0fd7-4bdc-8419-21aee039a4fd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 00:13:44.106102 kubelet[2753]: I1028 00:13:44.106050 2753 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9eebb6dc-0fd7-4bdc-8419-21aee039a4fd" (UID: "9eebb6dc-0fd7-4bdc-8419-21aee039a4fd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 28 00:13:44.106800 kubelet[2753]: I1028 00:13:44.106763 2753 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-kube-api-access-dfrxh" (OuterVolumeSpecName: "kube-api-access-dfrxh") pod "9eebb6dc-0fd7-4bdc-8419-21aee039a4fd" (UID: "9eebb6dc-0fd7-4bdc-8419-21aee039a4fd"). InnerVolumeSpecName "kube-api-access-dfrxh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 00:13:44.203143 kubelet[2753]: I1028 00:13:44.203092 2753 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 28 00:13:44.203143 kubelet[2753]: I1028 00:13:44.203124 2753 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dfrxh\" (UniqueName: \"kubernetes.io/projected/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-kube-api-access-dfrxh\") on node \"localhost\" DevicePath \"\"" Oct 28 00:13:44.203143 kubelet[2753]: I1028 00:13:44.203137 2753 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 28 00:13:44.665198 kubelet[2753]: E1028 00:13:44.665158 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:44.671690 systemd[1]: Removed slice kubepods-besteffort-pod9eebb6dc_0fd7_4bdc_8419_21aee039a4fd.slice - libcontainer container kubepods-besteffort-pod9eebb6dc_0fd7_4bdc_8419_21aee039a4fd.slice. 
Oct 28 00:13:44.681041 kubelet[2753]: I1028 00:13:44.680952 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nm6p8" podStartSLOduration=2.164453449 podStartE2EDuration="21.680924035s" podCreationTimestamp="2025-10-28 00:13:23 +0000 UTC" firstStartedPulling="2025-10-28 00:13:24.198684783 +0000 UTC m=+24.476316388" lastFinishedPulling="2025-10-28 00:13:43.71515537 +0000 UTC m=+43.992786974" observedRunningTime="2025-10-28 00:13:44.680913165 +0000 UTC m=+44.958544769" watchObservedRunningTime="2025-10-28 00:13:44.680924035 +0000 UTC m=+44.958555649" Oct 28 00:13:44.721204 systemd[1]: var-lib-kubelet-pods-9eebb6dc\x2d0fd7\x2d4bdc\x2d8419\x2d21aee039a4fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfrxh.mount: Deactivated successfully. Oct 28 00:13:44.721360 systemd[1]: var-lib-kubelet-pods-9eebb6dc\x2d0fd7\x2d4bdc\x2d8419\x2d21aee039a4fd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 28 00:13:44.742968 systemd[1]: Created slice kubepods-besteffort-pod8f40531a_a7a8_40d1_9aaf_cd96278fb41e.slice - libcontainer container kubepods-besteffort-pod8f40531a_a7a8_40d1_9aaf_cd96278fb41e.slice. 
Oct 28 00:13:44.807456 kubelet[2753]: I1028 00:13:44.807398 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f40531a-a7a8-40d1-9aaf-cd96278fb41e-whisker-ca-bundle\") pod \"whisker-7df4f7cbc6-jjqdv\" (UID: \"8f40531a-a7a8-40d1-9aaf-cd96278fb41e\") " pod="calico-system/whisker-7df4f7cbc6-jjqdv" Oct 28 00:13:44.807660 kubelet[2753]: I1028 00:13:44.807634 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f40531a-a7a8-40d1-9aaf-cd96278fb41e-whisker-backend-key-pair\") pod \"whisker-7df4f7cbc6-jjqdv\" (UID: \"8f40531a-a7a8-40d1-9aaf-cd96278fb41e\") " pod="calico-system/whisker-7df4f7cbc6-jjqdv" Oct 28 00:13:44.807660 kubelet[2753]: I1028 00:13:44.807657 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6znrt\" (UniqueName: \"kubernetes.io/projected/8f40531a-a7a8-40d1-9aaf-cd96278fb41e-kube-api-access-6znrt\") pod \"whisker-7df4f7cbc6-jjqdv\" (UID: \"8f40531a-a7a8-40d1-9aaf-cd96278fb41e\") " pod="calico-system/whisker-7df4f7cbc6-jjqdv" Oct 28 00:13:45.050794 containerd[1597]: time="2025-10-28T00:13:45.050658835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7df4f7cbc6-jjqdv,Uid:8f40531a-a7a8-40d1-9aaf-cd96278fb41e,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:45.245951 systemd-networkd[1499]: calid479c121c33: Link UP Oct 28 00:13:45.247166 systemd-networkd[1499]: calid479c121c33: Gained carrier Oct 28 00:13:45.275519 containerd[1597]: 2025-10-28 00:13:45.075 [INFO][3911] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 00:13:45.275519 containerd[1597]: 2025-10-28 00:13:45.094 [INFO][3911] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0 whisker-7df4f7cbc6- calico-system 8f40531a-a7a8-40d1-9aaf-cd96278fb41e 972 0 2025-10-28 00:13:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7df4f7cbc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7df4f7cbc6-jjqdv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid479c121c33 [] [] }} ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Namespace="calico-system" Pod="whisker-7df4f7cbc6-jjqdv" WorkloadEndpoint="localhost-k8s-whisker--7df4f7cbc6--jjqdv-" Oct 28 00:13:45.275519 containerd[1597]: 2025-10-28 00:13:45.094 [INFO][3911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Namespace="calico-system" Pod="whisker-7df4f7cbc6-jjqdv" WorkloadEndpoint="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" Oct 28 00:13:45.275519 containerd[1597]: 2025-10-28 00:13:45.167 [INFO][3926] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" HandleID="k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Workload="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.168 [INFO][3926] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" HandleID="k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Workload="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b1420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7df4f7cbc6-jjqdv", "timestamp":"2025-10-28 00:13:45.16771908 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.168 [INFO][3926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.169 [INFO][3926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.169 [INFO][3926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.184 [INFO][3926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" host="localhost" Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.196 [INFO][3926] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.201 [INFO][3926] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.204 [INFO][3926] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.208 [INFO][3926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:45.275787 containerd[1597]: 2025-10-28 00:13:45.209 [INFO][3926] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" host="localhost" Oct 28 00:13:45.276022 containerd[1597]: 2025-10-28 00:13:45.212 [INFO][3926] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2 Oct 28 00:13:45.276022 containerd[1597]: 2025-10-28 00:13:45.217 [INFO][3926] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" host="localhost" Oct 28 00:13:45.276022 containerd[1597]: 2025-10-28 00:13:45.223 [INFO][3926] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" host="localhost" Oct 28 00:13:45.276022 containerd[1597]: 2025-10-28 00:13:45.224 [INFO][3926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" host="localhost" Oct 28 00:13:45.276022 containerd[1597]: 2025-10-28 00:13:45.224 [INFO][3926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 00:13:45.276022 containerd[1597]: 2025-10-28 00:13:45.224 [INFO][3926] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" HandleID="k8s-pod-network.d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Workload="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" Oct 28 00:13:45.276271 containerd[1597]: 2025-10-28 00:13:45.231 [INFO][3911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Namespace="calico-system" Pod="whisker-7df4f7cbc6-jjqdv" WorkloadEndpoint="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0", GenerateName:"whisker-7df4f7cbc6-", Namespace:"calico-system", SelfLink:"", UID:"8f40531a-a7a8-40d1-9aaf-cd96278fb41e", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7df4f7cbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7df4f7cbc6-jjqdv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid479c121c33", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:45.276271 containerd[1597]: 2025-10-28 00:13:45.231 [INFO][3911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Namespace="calico-system" Pod="whisker-7df4f7cbc6-jjqdv" WorkloadEndpoint="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" Oct 28 00:13:45.276369 containerd[1597]: 2025-10-28 00:13:45.231 [INFO][3911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid479c121c33 ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Namespace="calico-system" Pod="whisker-7df4f7cbc6-jjqdv" WorkloadEndpoint="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" Oct 28 00:13:45.276369 containerd[1597]: 2025-10-28 00:13:45.245 [INFO][3911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Namespace="calico-system" Pod="whisker-7df4f7cbc6-jjqdv" WorkloadEndpoint="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" Oct 28 00:13:45.276406 containerd[1597]: 2025-10-28 00:13:45.246 [INFO][3911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Namespace="calico-system" Pod="whisker-7df4f7cbc6-jjqdv" WorkloadEndpoint="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0", GenerateName:"whisker-7df4f7cbc6-", Namespace:"calico-system", SelfLink:"", UID:"8f40531a-a7a8-40d1-9aaf-cd96278fb41e", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 44, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7df4f7cbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2", Pod:"whisker-7df4f7cbc6-jjqdv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid479c121c33", MAC:"e6:2b:7a:60:f0:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:45.276484 containerd[1597]: 2025-10-28 00:13:45.258 [INFO][3911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" Namespace="calico-system" Pod="whisker-7df4f7cbc6-jjqdv" WorkloadEndpoint="localhost-k8s-whisker--7df4f7cbc6--jjqdv-eth0" Oct 28 00:13:45.736257 systemd-networkd[1499]: vxlan.calico: Link UP Oct 28 00:13:45.736267 systemd-networkd[1499]: vxlan.calico: Gained carrier Oct 28 00:13:45.822179 kubelet[2753]: I1028 00:13:45.822127 2753 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eebb6dc-0fd7-4bdc-8419-21aee039a4fd" path="/var/lib/kubelet/pods/9eebb6dc-0fd7-4bdc-8419-21aee039a4fd/volumes" Oct 28 00:13:46.431681 systemd-networkd[1499]: calid479c121c33: Gained IPv6LL Oct 28 00:13:46.628932 containerd[1597]: time="2025-10-28T00:13:46.628862908Z" level=info msg="connecting to shim 
d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2" address="unix:///run/containerd/s/87b350656409a47fe93b7e917be1100254660aa2c16aa810e9af6d902ee0ddcf" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:46.657563 systemd[1]: Started cri-containerd-d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2.scope - libcontainer container d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2. Oct 28 00:13:46.673958 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:46.710449 containerd[1597]: time="2025-10-28T00:13:46.710315606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7df4f7cbc6-jjqdv,Uid:8f40531a-a7a8-40d1-9aaf-cd96278fb41e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6168518fdba9ba835b5150960822632aa2c60cf75dcf13c2f956634587009e2\"" Oct 28 00:13:46.712518 containerd[1597]: time="2025-10-28T00:13:46.712490859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 00:13:47.096719 containerd[1597]: time="2025-10-28T00:13:47.096656944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:47.097993 containerd[1597]: time="2025-10-28T00:13:47.097954358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 00:13:47.102718 containerd[1597]: time="2025-10-28T00:13:47.102618261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 00:13:47.102980 kubelet[2753]: E1028 00:13:47.102923 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 00:13:47.103326 kubelet[2753]: E1028 00:13:47.102989 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 00:13:47.103326 kubelet[2753]: E1028 00:13:47.103098 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7df4f7cbc6-jjqdv_calico-system(8f40531a-a7a8-40d1-9aaf-cd96278fb41e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:47.103737 containerd[1597]: time="2025-10-28T00:13:47.103693408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 00:13:47.199560 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL Oct 28 00:13:47.467603 containerd[1597]: time="2025-10-28T00:13:47.467386856Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:47.619561 containerd[1597]: time="2025-10-28T00:13:47.619478564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 00:13:47.619561 containerd[1597]: time="2025-10-28T00:13:47.619489314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 00:13:47.619885 kubelet[2753]: E1028 00:13:47.619840 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 00:13:47.619971 kubelet[2753]: E1028 00:13:47.619897 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 00:13:47.620066 kubelet[2753]: E1028 00:13:47.619998 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7df4f7cbc6-jjqdv_calico-system(8f40531a-a7a8-40d1-9aaf-cd96278fb41e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:47.620166 kubelet[2753]: E1028 00:13:47.620052 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7df4f7cbc6-jjqdv" podUID="8f40531a-a7a8-40d1-9aaf-cd96278fb41e" Oct 28 00:13:47.673603 kubelet[2753]: E1028 00:13:47.673511 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7df4f7cbc6-jjqdv" podUID="8f40531a-a7a8-40d1-9aaf-cd96278fb41e" Oct 28 00:13:47.881720 containerd[1597]: time="2025-10-28T00:13:47.881656633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pd76c,Uid:857b2565-c255-4b2a-a804-4d8f469fd36f,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:47.957205 containerd[1597]: time="2025-10-28T00:13:47.957037479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dccbd5fb7-7mmn5,Uid:e002b83f-c358-4e16-aba8-6f13c28c0b61,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:48.069129 containerd[1597]: 
time="2025-10-28T00:13:48.069083513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c466c9c-29kbz,Uid:eb03ae87-19b8-4ccf-ad2d-924a8b3b4421,Namespace:calico-apiserver,Attempt:0,}" Oct 28 00:13:48.168129 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:51872.service - OpenSSH per-connection server daemon (10.0.0.1:51872). Oct 28 00:13:48.353663 systemd-networkd[1499]: cali2ef4706d867: Link UP Oct 28 00:13:48.354494 systemd-networkd[1499]: cali2ef4706d867: Gained carrier Oct 28 00:13:48.384867 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 51872 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:13:48.387475 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:13:48.388158 containerd[1597]: 2025-10-28 00:13:48.253 [INFO][4189] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--pd76c-eth0 goldmane-7c778bb748- calico-system 857b2565-c255-4b2a-a804-4d8f469fd36f 862 0 2025-10-28 00:13:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-pd76c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2ef4706d867 [] [] }} ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Namespace="calico-system" Pod="goldmane-7c778bb748-pd76c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pd76c-" Oct 28 00:13:48.388158 containerd[1597]: 2025-10-28 00:13:48.254 [INFO][4189] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Namespace="calico-system" Pod="goldmane-7c778bb748-pd76c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" 
Oct 28 00:13:48.388158 containerd[1597]: 2025-10-28 00:13:48.288 [INFO][4208] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" HandleID="k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Workload="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.288 [INFO][4208] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" HandleID="k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Workload="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-pd76c", "timestamp":"2025-10-28 00:13:48.288322633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.288 [INFO][4208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.288 [INFO][4208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.288 [INFO][4208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.304 [INFO][4208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" host="localhost" Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.308 [INFO][4208] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.312 [INFO][4208] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.313 [INFO][4208] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.315 [INFO][4208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:48.388547 containerd[1597]: 2025-10-28 00:13:48.315 [INFO][4208] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" host="localhost" Oct 28 00:13:48.388997 containerd[1597]: 2025-10-28 00:13:48.316 [INFO][4208] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7 Oct 28 00:13:48.388997 containerd[1597]: 2025-10-28 00:13:48.335 [INFO][4208] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" host="localhost" Oct 28 00:13:48.388997 containerd[1597]: 2025-10-28 00:13:48.347 [INFO][4208] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" host="localhost" Oct 28 00:13:48.388997 containerd[1597]: 2025-10-28 00:13:48.347 [INFO][4208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" host="localhost" Oct 28 00:13:48.388997 containerd[1597]: 2025-10-28 00:13:48.347 [INFO][4208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 00:13:48.388997 containerd[1597]: 2025-10-28 00:13:48.347 [INFO][4208] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" HandleID="k8s-pod-network.667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Workload="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" Oct 28 00:13:48.389283 containerd[1597]: 2025-10-28 00:13:48.350 [INFO][4189] cni-plugin/k8s.go 418: Populated endpoint ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Namespace="calico-system" Pod="goldmane-7c778bb748-pd76c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pd76c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"857b2565-c255-4b2a-a804-4d8f469fd36f", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-pd76c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2ef4706d867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:48.389283 containerd[1597]: 2025-10-28 00:13:48.350 [INFO][4189] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Namespace="calico-system" Pod="goldmane-7c778bb748-pd76c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" Oct 28 00:13:48.389371 containerd[1597]: 2025-10-28 00:13:48.351 [INFO][4189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ef4706d867 ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Namespace="calico-system" Pod="goldmane-7c778bb748-pd76c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" Oct 28 00:13:48.389371 containerd[1597]: 2025-10-28 00:13:48.354 [INFO][4189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Namespace="calico-system" Pod="goldmane-7c778bb748-pd76c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" Oct 28 00:13:48.389463 containerd[1597]: 2025-10-28 00:13:48.355 [INFO][4189] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Namespace="calico-system" Pod="goldmane-7c778bb748-pd76c" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pd76c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"857b2565-c255-4b2a-a804-4d8f469fd36f", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7", Pod:"goldmane-7c778bb748-pd76c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2ef4706d867", MAC:"ea:a5:30:20:1c:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:48.389534 containerd[1597]: 2025-10-28 00:13:48.383 [INFO][4189] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" Namespace="calico-system" Pod="goldmane-7c778bb748-pd76c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pd76c-eth0" Oct 28 00:13:48.393597 systemd-logind[1578]: New session 9 of user core. 
Oct 28 00:13:48.403590 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 28 00:13:48.631057 sshd[4228]: Connection closed by 10.0.0.1 port 51872 Oct 28 00:13:48.631385 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Oct 28 00:13:48.635933 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:51872.service: Deactivated successfully. Oct 28 00:13:48.638074 systemd[1]: session-9.scope: Deactivated successfully. Oct 28 00:13:48.638968 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. Oct 28 00:13:48.640073 systemd-logind[1578]: Removed session 9. Oct 28 00:13:48.675129 kubelet[2753]: E1028 00:13:48.675055 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7df4f7cbc6-jjqdv" podUID="8f40531a-a7a8-40d1-9aaf-cd96278fb41e" Oct 28 00:13:48.984156 containerd[1597]: time="2025-10-28T00:13:48.983966785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c466c9c-dq2lt,Uid:02d0f3a2-6615-4333-9168-153cfad8a1a2,Namespace:calico-apiserver,Attempt:0,}" Oct 28 00:13:49.023500 containerd[1597]: time="2025-10-28T00:13:49.022069962Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76b9897cff-8q7s2,Uid:3e103a9d-d0d4-4b11-9367-559f1c47a552,Namespace:calico-apiserver,Attempt:0,}" Oct 28 00:13:49.079265 containerd[1597]: time="2025-10-28T00:13:49.079198325Z" level=info msg="connecting to shim 667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7" address="unix:///run/containerd/s/84aebbb510654b77357c55c030d31a847f69e6fcd3489300a99affcf3702d939" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:49.128659 systemd[1]: Started cri-containerd-667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7.scope - libcontainer container 667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7. Oct 28 00:13:49.145048 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:49.458020 containerd[1597]: time="2025-10-28T00:13:49.457910796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pd76c,Uid:857b2565-c255-4b2a-a804-4d8f469fd36f,Namespace:calico-system,Attempt:0,} returns sandbox id \"667977066c0c699c70ac771bcd57b67c66c2ec9ea26c4e66d93834d8162d60d7\"" Oct 28 00:13:49.460002 containerd[1597]: time="2025-10-28T00:13:49.459959520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 00:13:49.553700 systemd-networkd[1499]: cali09e4a65a91d: Link UP Oct 28 00:13:49.555022 systemd-networkd[1499]: cali09e4a65a91d: Gained carrier Oct 28 00:13:49.578642 containerd[1597]: 2025-10-28 00:13:48.986 [INFO][4244] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0 calico-kube-controllers-6dccbd5fb7- calico-system e002b83f-c358-4e16-aba8-6f13c28c0b61 868 0 2025-10-28 00:13:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dccbd5fb7 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6dccbd5fb7-7mmn5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali09e4a65a91d [] [] }} ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Namespace="calico-system" Pod="calico-kube-controllers-6dccbd5fb7-7mmn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-" Oct 28 00:13:49.578642 containerd[1597]: 2025-10-28 00:13:48.986 [INFO][4244] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Namespace="calico-system" Pod="calico-kube-controllers-6dccbd5fb7-7mmn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" Oct 28 00:13:49.578642 containerd[1597]: 2025-10-28 00:13:49.078 [INFO][4268] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" HandleID="k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Workload="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.079 [INFO][4268] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" HandleID="k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Workload="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000185de0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6dccbd5fb7-7mmn5", "timestamp":"2025-10-28 00:13:49.078857045 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.079 [INFO][4268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.079 [INFO][4268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.079 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.097 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" host="localhost" Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.108 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.116 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.119 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.238 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:49.578883 containerd[1597]: 2025-10-28 00:13:49.238 [INFO][4268] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" host="localhost" Oct 28 00:13:49.579140 containerd[1597]: 2025-10-28 00:13:49.282 [INFO][4268] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806 Oct 28 00:13:49.579140 
containerd[1597]: 2025-10-28 00:13:49.481 [INFO][4268] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" host="localhost" Oct 28 00:13:49.579140 containerd[1597]: 2025-10-28 00:13:49.546 [INFO][4268] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" host="localhost" Oct 28 00:13:49.579140 containerd[1597]: 2025-10-28 00:13:49.546 [INFO][4268] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" host="localhost" Oct 28 00:13:49.579140 containerd[1597]: 2025-10-28 00:13:49.546 [INFO][4268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 00:13:49.579140 containerd[1597]: 2025-10-28 00:13:49.546 [INFO][4268] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" HandleID="k8s-pod-network.251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Workload="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" Oct 28 00:13:49.579295 containerd[1597]: 2025-10-28 00:13:49.550 [INFO][4244] cni-plugin/k8s.go 418: Populated endpoint ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Namespace="calico-system" Pod="calico-kube-controllers-6dccbd5fb7-7mmn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0", GenerateName:"calico-kube-controllers-6dccbd5fb7-", Namespace:"calico-system", SelfLink:"", 
UID:"e002b83f-c358-4e16-aba8-6f13c28c0b61", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dccbd5fb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6dccbd5fb7-7mmn5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09e4a65a91d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:49.579358 containerd[1597]: 2025-10-28 00:13:49.550 [INFO][4244] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Namespace="calico-system" Pod="calico-kube-controllers-6dccbd5fb7-7mmn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" Oct 28 00:13:49.579358 containerd[1597]: 2025-10-28 00:13:49.550 [INFO][4244] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09e4a65a91d ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Namespace="calico-system" Pod="calico-kube-controllers-6dccbd5fb7-7mmn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" Oct 28 
00:13:49.579358 containerd[1597]: 2025-10-28 00:13:49.556 [INFO][4244] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Namespace="calico-system" Pod="calico-kube-controllers-6dccbd5fb7-7mmn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" Oct 28 00:13:49.579560 containerd[1597]: 2025-10-28 00:13:49.559 [INFO][4244] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Namespace="calico-system" Pod="calico-kube-controllers-6dccbd5fb7-7mmn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0", GenerateName:"calico-kube-controllers-6dccbd5fb7-", Namespace:"calico-system", SelfLink:"", UID:"e002b83f-c358-4e16-aba8-6f13c28c0b61", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dccbd5fb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806", Pod:"calico-kube-controllers-6dccbd5fb7-7mmn5", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09e4a65a91d", MAC:"62:be:9e:bf:96:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:49.579623 containerd[1597]: 2025-10-28 00:13:49.572 [INFO][4244] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" Namespace="calico-system" Pod="calico-kube-controllers-6dccbd5fb7-7mmn5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dccbd5fb7--7mmn5-eth0" Oct 28 00:13:49.618972 containerd[1597]: time="2025-10-28T00:13:49.618918625Z" level=info msg="connecting to shim 251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806" address="unix:///run/containerd/s/d602397c94c9f7cecb8b59d337b81b06cd6b4439f9c794e01df27f8b8b281d7e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:49.621083 systemd-networkd[1499]: cali19e5b39ebd8: Link UP Oct 28 00:13:49.624606 systemd-networkd[1499]: cali19e5b39ebd8: Gained carrier Oct 28 00:13:49.660220 containerd[1597]: 2025-10-28 00:13:49.075 [INFO][4257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0 calico-apiserver-9c466c9c- calico-apiserver eb03ae87-19b8-4ccf-ad2d-924a8b3b4421 864 0 2025-10-28 00:13:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c466c9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9c466c9c-29kbz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
cali19e5b39ebd8 [] [] }} ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-29kbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--29kbz-" Oct 28 00:13:49.660220 containerd[1597]: 2025-10-28 00:13:49.075 [INFO][4257] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-29kbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" Oct 28 00:13:49.660220 containerd[1597]: 2025-10-28 00:13:49.124 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" HandleID="k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Workload="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.124 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" HandleID="k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Workload="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9c466c9c-29kbz", "timestamp":"2025-10-28 00:13:49.123999339 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.124 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.546 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.547 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.558 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" host="localhost" Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.569 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.578 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.585 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.589 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:49.660498 containerd[1597]: 2025-10-28 00:13:49.589 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" host="localhost" Oct 28 00:13:49.660741 containerd[1597]: 2025-10-28 00:13:49.591 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d Oct 28 00:13:49.660741 containerd[1597]: 2025-10-28 00:13:49.596 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" host="localhost" Oct 28 00:13:49.660741 containerd[1597]: 2025-10-28 00:13:49.605 [INFO][4325] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" host="localhost" Oct 28 00:13:49.660741 containerd[1597]: 2025-10-28 00:13:49.605 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" host="localhost" Oct 28 00:13:49.660741 containerd[1597]: 2025-10-28 00:13:49.605 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 00:13:49.660741 containerd[1597]: 2025-10-28 00:13:49.605 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" HandleID="k8s-pod-network.41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Workload="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" Oct 28 00:13:49.660881 containerd[1597]: 2025-10-28 00:13:49.615 [INFO][4257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-29kbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0", GenerateName:"calico-apiserver-9c466c9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb03ae87-19b8-4ccf-ad2d-924a8b3b4421", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c466c9c", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9c466c9c-29kbz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19e5b39ebd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:49.660940 containerd[1597]: 2025-10-28 00:13:49.617 [INFO][4257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-29kbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" Oct 28 00:13:49.660940 containerd[1597]: 2025-10-28 00:13:49.617 [INFO][4257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19e5b39ebd8 ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-29kbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" Oct 28 00:13:49.660940 containerd[1597]: 2025-10-28 00:13:49.624 [INFO][4257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-29kbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" Oct 28 00:13:49.661033 containerd[1597]: 2025-10-28 00:13:49.628 
[INFO][4257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-29kbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0", GenerateName:"calico-apiserver-9c466c9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb03ae87-19b8-4ccf-ad2d-924a8b3b4421", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c466c9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d", Pod:"calico-apiserver-9c466c9c-29kbz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19e5b39ebd8", MAC:"0e:a8:fb:be:d8:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:49.661094 containerd[1597]: 2025-10-28 00:13:49.650 [INFO][4257] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-29kbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--29kbz-eth0" Oct 28 00:13:49.674718 systemd[1]: Started cri-containerd-251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806.scope - libcontainer container 251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806. Oct 28 00:13:49.699083 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:49.759607 systemd-networkd[1499]: cali2ef4706d867: Gained IPv6LL Oct 28 00:13:49.918896 containerd[1597]: time="2025-10-28T00:13:49.918840826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dccbd5fb7-7mmn5,Uid:e002b83f-c358-4e16-aba8-6f13c28c0b61,Namespace:calico-system,Attempt:0,} returns sandbox id \"251a7f77a8ef8de5fee7ce4967f76fa1bc8900b7a8d8bf8546bc7578b1aea806\"" Oct 28 00:13:49.941511 containerd[1597]: time="2025-10-28T00:13:49.941457243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:50.072096 containerd[1597]: time="2025-10-28T00:13:50.071575195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 00:13:50.072734 containerd[1597]: time="2025-10-28T00:13:50.071817710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 00:13:50.072923 kubelet[2753]: E1028 00:13:50.072869 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 00:13:50.073317 kubelet[2753]: E1028 00:13:50.072935 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 00:13:50.073361 kubelet[2753]: E1028 00:13:50.073299 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pd76c_calico-system(857b2565-c255-4b2a-a804-4d8f469fd36f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:50.073390 containerd[1597]: time="2025-10-28T00:13:50.073316292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 00:13:50.073444 kubelet[2753]: E1028 00:13:50.073368 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pd76c" podUID="857b2565-c255-4b2a-a804-4d8f469fd36f" Oct 28 00:13:50.126551 systemd-networkd[1499]: cali4c262b8a09b: Link UP Oct 28 00:13:50.128258 systemd-networkd[1499]: cali4c262b8a09b: Gained carrier Oct 28 00:13:50.205837 containerd[1597]: time="2025-10-28T00:13:50.205786593Z" 
level=info msg="connecting to shim 41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d" address="unix:///run/containerd/s/0e5beccb333d5eaf7255c15a1cf14799da043248141eca8b9aa51a4c234ccabe" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:50.231579 systemd[1]: Started cri-containerd-41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d.scope - libcontainer container 41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d. Oct 28 00:13:50.246345 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:50.261181 containerd[1597]: 2025-10-28 00:13:49.108 [INFO][4277] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0 calico-apiserver-9c466c9c- calico-apiserver 02d0f3a2-6615-4333-9168-153cfad8a1a2 867 0 2025-10-28 00:13:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c466c9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9c466c9c-dq2lt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c262b8a09b [] [] }} ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-dq2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-" Oct 28 00:13:50.261181 containerd[1597]: 2025-10-28 00:13:49.111 [INFO][4277] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-dq2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" Oct 28 00:13:50.261181 containerd[1597]: 
2025-10-28 00:13:49.145 [INFO][4351] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" HandleID="k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Workload="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.146 [INFO][4351] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" HandleID="k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Workload="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9c466c9c-dq2lt", "timestamp":"2025-10-28 00:13:49.145761492 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.146 [INFO][4351] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.606 [INFO][4351] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.606 [INFO][4351] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.655 [INFO][4351] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" host="localhost" Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.670 [INFO][4351] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.680 [INFO][4351] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.684 [INFO][4351] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.982 [INFO][4351] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:50.261483 containerd[1597]: 2025-10-28 00:13:49.982 [INFO][4351] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" host="localhost" Oct 28 00:13:50.261747 containerd[1597]: 2025-10-28 00:13:50.070 [INFO][4351] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c Oct 28 00:13:50.261747 containerd[1597]: 2025-10-28 00:13:50.091 [INFO][4351] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" host="localhost" Oct 28 00:13:50.261747 containerd[1597]: 2025-10-28 00:13:50.112 [INFO][4351] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" host="localhost" Oct 28 00:13:50.261747 containerd[1597]: 2025-10-28 00:13:50.112 [INFO][4351] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" host="localhost" Oct 28 00:13:50.261747 containerd[1597]: 2025-10-28 00:13:50.112 [INFO][4351] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 00:13:50.261747 containerd[1597]: 2025-10-28 00:13:50.112 [INFO][4351] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" HandleID="k8s-pod-network.781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Workload="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" Oct 28 00:13:50.261868 containerd[1597]: 2025-10-28 00:13:50.116 [INFO][4277] cni-plugin/k8s.go 418: Populated endpoint ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-dq2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0", GenerateName:"calico-apiserver-9c466c9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"02d0f3a2-6615-4333-9168-153cfad8a1a2", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c466c9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9c466c9c-dq2lt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c262b8a09b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:50.261933 containerd[1597]: 2025-10-28 00:13:50.116 [INFO][4277] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-dq2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" Oct 28 00:13:50.261933 containerd[1597]: 2025-10-28 00:13:50.117 [INFO][4277] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c262b8a09b ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-dq2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" Oct 28 00:13:50.261933 containerd[1597]: 2025-10-28 00:13:50.128 [INFO][4277] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-dq2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" Oct 28 00:13:50.261999 containerd[1597]: 2025-10-28 00:13:50.129 [INFO][4277] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-dq2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0", GenerateName:"calico-apiserver-9c466c9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"02d0f3a2-6615-4333-9168-153cfad8a1a2", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c466c9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c", Pod:"calico-apiserver-9c466c9c-dq2lt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c262b8a09b", MAC:"7a:87:e3:12:9b:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:50.262052 containerd[1597]: 2025-10-28 00:13:50.254 [INFO][4277] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" Namespace="calico-apiserver" Pod="calico-apiserver-9c466c9c-dq2lt" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c466c9c--dq2lt-eth0" Oct 28 00:13:50.291252 containerd[1597]: time="2025-10-28T00:13:50.291193810Z" level=info msg="connecting to shim 781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c" address="unix:///run/containerd/s/3dffbf9351e0e4529f4c53b09a336c20cce07a5c66a65a838f6b161234527a47" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:50.295136 systemd-networkd[1499]: cali8cc6384bfba: Link UP Oct 28 00:13:50.296283 systemd-networkd[1499]: cali8cc6384bfba: Gained carrier Oct 28 00:13:50.308647 containerd[1597]: time="2025-10-28T00:13:50.308575673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c466c9c-29kbz,Uid:eb03ae87-19b8-4ccf-ad2d-924a8b3b4421,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"41c96f96547ee6441c03c5d40e6e528d73a0d18195be62da3aaf03da3deb4f8d\"" Oct 28 00:13:50.319269 containerd[1597]: 2025-10-28 00:13:49.125 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0 calico-apiserver-76b9897cff- calico-apiserver 3e103a9d-d0d4-4b11-9367-559f1c47a552 863 0 2025-10-28 00:13:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76b9897cff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76b9897cff-8q7s2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8cc6384bfba [] [] }} ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Namespace="calico-apiserver" Pod="calico-apiserver-76b9897cff-8q7s2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-" Oct 28 00:13:50.319269 containerd[1597]: 2025-10-28 00:13:49.126 [INFO][4289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Namespace="calico-apiserver" Pod="calico-apiserver-76b9897cff-8q7s2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" Oct 28 00:13:50.319269 containerd[1597]: 2025-10-28 00:13:49.303 [INFO][4373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" HandleID="k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Workload="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:49.303 [INFO][4373] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" HandleID="k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Workload="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76b9897cff-8q7s2", "timestamp":"2025-10-28 00:13:49.303197859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:49.303 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:50.112 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:50.112 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:50.123 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" host="localhost" Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:50.133 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:50.257 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:50.260 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:50.263 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:50.319547 containerd[1597]: 2025-10-28 00:13:50.263 [INFO][4373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" host="localhost" Oct 28 00:13:50.319867 containerd[1597]: 2025-10-28 00:13:50.267 [INFO][4373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc Oct 28 00:13:50.319867 containerd[1597]: 2025-10-28 00:13:50.271 [INFO][4373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" host="localhost" Oct 28 00:13:50.319867 containerd[1597]: 2025-10-28 00:13:50.280 [INFO][4373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" host="localhost" Oct 28 00:13:50.319867 containerd[1597]: 2025-10-28 00:13:50.281 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" host="localhost" Oct 28 00:13:50.319867 containerd[1597]: 2025-10-28 00:13:50.281 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 00:13:50.319867 containerd[1597]: 2025-10-28 00:13:50.281 [INFO][4373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" HandleID="k8s-pod-network.daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Workload="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" Oct 28 00:13:50.320043 containerd[1597]: 2025-10-28 00:13:50.286 [INFO][4289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Namespace="calico-apiserver" Pod="calico-apiserver-76b9897cff-8q7s2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0", GenerateName:"calico-apiserver-76b9897cff-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e103a9d-d0d4-4b11-9367-559f1c47a552", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76b9897cff", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76b9897cff-8q7s2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cc6384bfba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:50.320120 containerd[1597]: 2025-10-28 00:13:50.286 [INFO][4289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Namespace="calico-apiserver" Pod="calico-apiserver-76b9897cff-8q7s2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" Oct 28 00:13:50.320120 containerd[1597]: 2025-10-28 00:13:50.287 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cc6384bfba ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Namespace="calico-apiserver" Pod="calico-apiserver-76b9897cff-8q7s2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" Oct 28 00:13:50.320120 containerd[1597]: 2025-10-28 00:13:50.297 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Namespace="calico-apiserver" Pod="calico-apiserver-76b9897cff-8q7s2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" Oct 28 00:13:50.320240 containerd[1597]: 2025-10-28 00:13:50.303 [INFO][4289] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Namespace="calico-apiserver" Pod="calico-apiserver-76b9897cff-8q7s2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0", GenerateName:"calico-apiserver-76b9897cff-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e103a9d-d0d4-4b11-9367-559f1c47a552", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76b9897cff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc", Pod:"calico-apiserver-76b9897cff-8q7s2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cc6384bfba", MAC:"22:6b:87:ff:8b:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:50.320314 containerd[1597]: 2025-10-28 00:13:50.314 [INFO][4289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" Namespace="calico-apiserver" Pod="calico-apiserver-76b9897cff-8q7s2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76b9897cff--8q7s2-eth0" Oct 28 00:13:50.329736 systemd[1]: Started cri-containerd-781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c.scope - libcontainer container 781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c. Oct 28 00:13:50.349498 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:50.354160 containerd[1597]: time="2025-10-28T00:13:50.354108335Z" level=info msg="connecting to shim daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc" address="unix:///run/containerd/s/eae82f99b21e3dd527a2250d6c122b657b9c919e285a509c394910eb4c5a1206" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:50.391576 systemd[1]: Started cri-containerd-daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc.scope - libcontainer container daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc. 
Oct 28 00:13:50.396476 containerd[1597]: time="2025-10-28T00:13:50.396403562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c466c9c-dq2lt,Uid:02d0f3a2-6615-4333-9168-153cfad8a1a2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"781d34258fdc60fc414b9e0af1cfacd0252e983f5dca11e0025a79b988b5484c\"" Oct 28 00:13:50.409911 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:50.444670 containerd[1597]: time="2025-10-28T00:13:50.444600162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76b9897cff-8q7s2,Uid:3e103a9d-d0d4-4b11-9367-559f1c47a552,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"daa586e7797c5c5b78120eaac2e71675823bc07f8687b7f257efda14d2771bbc\"" Oct 28 00:13:50.474617 containerd[1597]: time="2025-10-28T00:13:50.474568311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:50.475823 containerd[1597]: time="2025-10-28T00:13:50.475743296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 00:13:50.475974 containerd[1597]: time="2025-10-28T00:13:50.475752182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 00:13:50.476223 kubelet[2753]: E1028 00:13:50.476176 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 00:13:50.476342 kubelet[2753]: E1028 00:13:50.476309 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 00:13:50.476576 kubelet[2753]: E1028 00:13:50.476539 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dccbd5fb7-7mmn5_calico-system(e002b83f-c358-4e16-aba8-6f13c28c0b61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:50.476666 kubelet[2753]: E1028 00:13:50.476610 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" podUID="e002b83f-c358-4e16-aba8-6f13c28c0b61" Oct 28 00:13:50.476852 containerd[1597]: time="2025-10-28T00:13:50.476828912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 00:13:50.655728 systemd-networkd[1499]: cali09e4a65a91d: Gained IPv6LL Oct 28 00:13:50.684581 kubelet[2753]: E1028 00:13:50.684500 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" podUID="e002b83f-c358-4e16-aba8-6f13c28c0b61" Oct 28 00:13:50.688022 kubelet[2753]: E1028 00:13:50.687970 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pd76c" podUID="857b2565-c255-4b2a-a804-4d8f469fd36f" Oct 28 00:13:50.824365 kubelet[2753]: E1028 00:13:50.824319 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:50.824913 containerd[1597]: time="2025-10-28T00:13:50.824868940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v8g72,Uid:ffaa584b-c0eb-4855-80a7-bb13ffeca77a,Namespace:kube-system,Attempt:0,}" Oct 28 00:13:50.832116 containerd[1597]: time="2025-10-28T00:13:50.831919028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgknx,Uid:9cf7db7c-cf1f-40f0-bd37-4896435636ad,Namespace:calico-system,Attempt:0,}" Oct 28 00:13:50.834208 kubelet[2753]: E1028 00:13:50.834156 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:50.835222 containerd[1597]: time="2025-10-28T00:13:50.834520850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-flcqw,Uid:0e5367a5-74db-4442-a121-3a4c264915e4,Namespace:kube-system,Attempt:0,}" Oct 28 00:13:50.868010 containerd[1597]: time="2025-10-28T00:13:50.867953599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:50.869690 containerd[1597]: time="2025-10-28T00:13:50.869634553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 00:13:50.869800 containerd[1597]: time="2025-10-28T00:13:50.869667074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 00:13:50.869994 kubelet[2753]: E1028 00:13:50.869943 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:13:50.870072 kubelet[2753]: E1028 00:13:50.870010 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:13:50.870368 kubelet[2753]: E1028 00:13:50.870256 2753 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-9c466c9c-29kbz_calico-apiserver(eb03ae87-19b8-4ccf-ad2d-924a8b3b4421): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:50.870368 kubelet[2753]: E1028 00:13:50.870304 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" podUID="eb03ae87-19b8-4ccf-ad2d-924a8b3b4421" Oct 28 00:13:50.871120 containerd[1597]: time="2025-10-28T00:13:50.871053355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 00:13:50.975795 systemd-networkd[1499]: cali19e5b39ebd8: Gained IPv6LL Oct 28 00:13:50.984005 systemd-networkd[1499]: cali3401bb92846: Link UP Oct 28 00:13:50.985263 systemd-networkd[1499]: cali3401bb92846: Gained carrier Oct 28 00:13:51.006070 containerd[1597]: 2025-10-28 00:13:50.890 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hgknx-eth0 csi-node-driver- calico-system 9cf7db7c-cf1f-40f0-bd37-4896435636ad 733 0 2025-10-28 00:13:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] 
map[] [] [] []} {k8s localhost csi-node-driver-hgknx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3401bb92846 [] [] }} ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Namespace="calico-system" Pod="csi-node-driver-hgknx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgknx-" Oct 28 00:13:51.006070 containerd[1597]: 2025-10-28 00:13:50.890 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Namespace="calico-system" Pod="csi-node-driver-hgknx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgknx-eth0" Oct 28 00:13:51.006070 containerd[1597]: 2025-10-28 00:13:50.926 [INFO][4650] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" HandleID="k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Workload="localhost-k8s-csi--node--driver--hgknx-eth0" Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.926 [INFO][4650] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" HandleID="k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Workload="localhost-k8s-csi--node--driver--hgknx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003328e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hgknx", "timestamp":"2025-10-28 00:13:50.926015559 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.926 [INFO][4650] ipam/ipam_plugin.go 377: About to acquire host-wide 
IPAM lock. Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.926 [INFO][4650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.926 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.938 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" host="localhost" Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.946 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.953 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.956 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.959 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:51.006337 containerd[1597]: 2025-10-28 00:13:50.959 [INFO][4650] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" host="localhost" Oct 28 00:13:51.006632 containerd[1597]: 2025-10-28 00:13:50.961 [INFO][4650] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac Oct 28 00:13:51.006632 containerd[1597]: 2025-10-28 00:13:50.964 [INFO][4650] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" host="localhost" Oct 28 00:13:51.006632 containerd[1597]: 2025-10-28 00:13:50.971 [INFO][4650] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" host="localhost" Oct 28 00:13:51.006632 containerd[1597]: 2025-10-28 00:13:50.971 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" host="localhost" Oct 28 00:13:51.006632 containerd[1597]: 2025-10-28 00:13:50.971 [INFO][4650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 00:13:51.006632 containerd[1597]: 2025-10-28 00:13:50.972 [INFO][4650] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" HandleID="k8s-pod-network.de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Workload="localhost-k8s-csi--node--driver--hgknx-eth0" Oct 28 00:13:51.006786 containerd[1597]: 2025-10-28 00:13:50.975 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Namespace="calico-system" Pod="csi-node-driver-hgknx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgknx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hgknx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9cf7db7c-cf1f-40f0-bd37-4896435636ad", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hgknx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3401bb92846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:51.006858 containerd[1597]: 2025-10-28 00:13:50.975 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Namespace="calico-system" Pod="csi-node-driver-hgknx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgknx-eth0" Oct 28 00:13:51.006858 containerd[1597]: 2025-10-28 00:13:50.976 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3401bb92846 ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Namespace="calico-system" Pod="csi-node-driver-hgknx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgknx-eth0" Oct 28 00:13:51.006858 containerd[1597]: 2025-10-28 00:13:50.986 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Namespace="calico-system" Pod="csi-node-driver-hgknx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgknx-eth0" Oct 28 00:13:51.007014 containerd[1597]: 2025-10-28 00:13:50.987 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" Namespace="calico-system" Pod="csi-node-driver-hgknx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgknx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hgknx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9cf7db7c-cf1f-40f0-bd37-4896435636ad", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac", Pod:"csi-node-driver-hgknx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3401bb92846", MAC:"22:cb:29:2a:6e:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:51.007090 containerd[1597]: 2025-10-28 00:13:51.001 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" 
Namespace="calico-system" Pod="csi-node-driver-hgknx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgknx-eth0" Oct 28 00:13:51.047454 containerd[1597]: time="2025-10-28T00:13:51.047301080Z" level=info msg="connecting to shim de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac" address="unix:///run/containerd/s/6dd84d40e0d9c607b23a30a9beb60e688006cfb6cd60e1700e7bcc3fa972e6d1" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:51.089618 systemd[1]: Started cri-containerd-de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac.scope - libcontainer container de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac. Oct 28 00:13:51.109568 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:51.129276 containerd[1597]: time="2025-10-28T00:13:51.128949388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgknx,Uid:9cf7db7c-cf1f-40f0-bd37-4896435636ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"de4709dbe7252771771c3e619e53e0cc07e3e3e72efd88c70ae8319c0012b4ac\"" Oct 28 00:13:51.143792 systemd-networkd[1499]: calicbc8f254541: Link UP Oct 28 00:13:51.144317 systemd-networkd[1499]: calicbc8f254541: Gained carrier Oct 28 00:13:51.160551 containerd[1597]: 2025-10-28 00:13:50.884 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--flcqw-eth0 coredns-66bc5c9577- kube-system 0e5367a5-74db-4442-a121-3a4c264915e4 866 0 2025-10-28 00:13:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-flcqw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicbc8f254541 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 
} {readiness-probe TCP 8181 0 }] [] }} ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Namespace="kube-system" Pod="coredns-66bc5c9577-flcqw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--flcqw-" Oct 28 00:13:51.160551 containerd[1597]: 2025-10-28 00:13:50.884 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Namespace="kube-system" Pod="coredns-66bc5c9577-flcqw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" Oct 28 00:13:51.160551 containerd[1597]: 2025-10-28 00:13:50.951 [INFO][4649] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" HandleID="k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Workload="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:50.952 [INFO][4649] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" HandleID="k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Workload="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-flcqw", "timestamp":"2025-10-28 00:13:50.951491028 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:50.952 [INFO][4649] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:50.972 [INFO][4649] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:50.972 [INFO][4649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:51.039 [INFO][4649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" host="localhost" Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:51.051 [INFO][4649] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:51.062 [INFO][4649] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:51.065 [INFO][4649] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:51.068 [INFO][4649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:51.160777 containerd[1597]: 2025-10-28 00:13:51.070 [INFO][4649] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" host="localhost" Oct 28 00:13:51.161036 containerd[1597]: 2025-10-28 00:13:51.103 [INFO][4649] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1 Oct 28 00:13:51.161036 containerd[1597]: 2025-10-28 00:13:51.130 [INFO][4649] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" host="localhost" Oct 28 00:13:51.161036 containerd[1597]: 2025-10-28 00:13:51.137 [INFO][4649] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" host="localhost" Oct 28 00:13:51.161036 containerd[1597]: 2025-10-28 00:13:51.137 [INFO][4649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" host="localhost" Oct 28 00:13:51.161036 containerd[1597]: 2025-10-28 00:13:51.137 [INFO][4649] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 00:13:51.161036 containerd[1597]: 2025-10-28 00:13:51.137 [INFO][4649] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" HandleID="k8s-pod-network.93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Workload="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" Oct 28 00:13:51.161151 containerd[1597]: 2025-10-28 00:13:51.140 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Namespace="kube-system" Pod="coredns-66bc5c9577-flcqw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--flcqw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0e5367a5-74db-4442-a121-3a4c264915e4", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-flcqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbc8f254541", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:51.161151 containerd[1597]: 2025-10-28 00:13:51.140 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Namespace="kube-system" Pod="coredns-66bc5c9577-flcqw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" Oct 28 00:13:51.161151 containerd[1597]: 2025-10-28 00:13:51.140 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbc8f254541 ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" 
Namespace="kube-system" Pod="coredns-66bc5c9577-flcqw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" Oct 28 00:13:51.161151 containerd[1597]: 2025-10-28 00:13:51.144 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Namespace="kube-system" Pod="coredns-66bc5c9577-flcqw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" Oct 28 00:13:51.161151 containerd[1597]: 2025-10-28 00:13:51.144 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Namespace="kube-system" Pod="coredns-66bc5c9577-flcqw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--flcqw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0e5367a5-74db-4442-a121-3a4c264915e4", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1", Pod:"coredns-66bc5c9577-flcqw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbc8f254541", MAC:"e6:3e:d5:5d:38:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:51.161151 containerd[1597]: 2025-10-28 00:13:51.155 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" Namespace="kube-system" Pod="coredns-66bc5c9577-flcqw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--flcqw-eth0" Oct 28 00:13:51.193581 containerd[1597]: time="2025-10-28T00:13:51.193512982Z" level=info msg="connecting to shim 93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1" address="unix:///run/containerd/s/f1f21ee62c0d72ae6a1f6e605bc9de8720e1adf4d74891bc3615c1d370b0e7f1" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:51.196916 systemd-networkd[1499]: calib7840544520: Link UP Oct 28 00:13:51.198079 systemd-networkd[1499]: calib7840544520: Gained carrier Oct 28 00:13:51.208965 containerd[1597]: time="2025-10-28T00:13:51.207158274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:51.210367 containerd[1597]: 
time="2025-10-28T00:13:51.210074174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 00:13:51.210565 containerd[1597]: time="2025-10-28T00:13:51.210268569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 00:13:51.210921 kubelet[2753]: E1028 00:13:51.210855 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:13:51.210921 kubelet[2753]: E1028 00:13:51.210915 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:13:51.212001 kubelet[2753]: E1028 00:13:51.211137 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-9c466c9c-dq2lt_calico-apiserver(02d0f3a2-6615-4333-9168-153cfad8a1a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:51.212001 kubelet[2753]: E1028 00:13:51.211186 2753 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" podUID="02d0f3a2-6615-4333-9168-153cfad8a1a2" Oct 28 00:13:51.212667 containerd[1597]: time="2025-10-28T00:13:51.212634348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:50.894 [INFO][4605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--v8g72-eth0 coredns-66bc5c9577- kube-system ffaa584b-c0eb-4855-80a7-bb13ffeca77a 859 0 2025-10-28 00:13:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-v8g72 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib7840544520 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Namespace="kube-system" Pod="coredns-66bc5c9577-v8g72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--v8g72-" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:50.895 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Namespace="kube-system" Pod="coredns-66bc5c9577-v8g72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:50.964 [INFO][4661] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" HandleID="k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Workload="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:50.964 [INFO][4661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" HandleID="k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Workload="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-v8g72", "timestamp":"2025-10-28 00:13:50.964042097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:50.964 [INFO][4661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.137 [INFO][4661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.138 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.149 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.161 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.167 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.171 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.174 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.174 [INFO][4661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.176 [INFO][4661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.181 [INFO][4661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.190 [INFO][4661] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.190 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" host="localhost" Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.190 [INFO][4661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 00:13:51.223467 containerd[1597]: 2025-10-28 00:13:51.190 [INFO][4661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" HandleID="k8s-pod-network.df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Workload="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" Oct 28 00:13:51.223995 containerd[1597]: 2025-10-28 00:13:51.194 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Namespace="kube-system" Pod="coredns-66bc5c9577-v8g72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--v8g72-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ffaa584b-c0eb-4855-80a7-bb13ffeca77a", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-v8g72", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7840544520", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:51.223995 containerd[1597]: 2025-10-28 00:13:51.194 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Namespace="kube-system" Pod="coredns-66bc5c9577-v8g72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" Oct 28 00:13:51.223995 containerd[1597]: 2025-10-28 00:13:51.194 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7840544520 ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Namespace="kube-system" Pod="coredns-66bc5c9577-v8g72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" Oct 28 
00:13:51.223995 containerd[1597]: 2025-10-28 00:13:51.197 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Namespace="kube-system" Pod="coredns-66bc5c9577-v8g72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" Oct 28 00:13:51.223995 containerd[1597]: 2025-10-28 00:13:51.199 [INFO][4605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Namespace="kube-system" Pod="coredns-66bc5c9577-v8g72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--v8g72-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ffaa584b-c0eb-4855-80a7-bb13ffeca77a", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd", Pod:"coredns-66bc5c9577-v8g72", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7840544520", 
MAC:"4e:33:99:f3:93:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 00:13:51.223995 containerd[1597]: 2025-10-28 00:13:51.217 [INFO][4605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" Namespace="kube-system" Pod="coredns-66bc5c9577-v8g72" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--v8g72-eth0" Oct 28 00:13:51.232607 systemd[1]: Started cri-containerd-93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1.scope - libcontainer container 93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1. 
Oct 28 00:13:51.253448 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:51.258624 containerd[1597]: time="2025-10-28T00:13:51.258344198Z" level=info msg="connecting to shim df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd" address="unix:///run/containerd/s/595346ec07d0ff2cd4e2e4b183e5ca218e07115eb56aec1c9aaa345763c9a888" namespace=k8s.io protocol=ttrpc version=3 Oct 28 00:13:51.292647 systemd[1]: Started cri-containerd-df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd.scope - libcontainer container df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd. Oct 28 00:13:51.300760 containerd[1597]: time="2025-10-28T00:13:51.300653409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-flcqw,Uid:0e5367a5-74db-4442-a121-3a4c264915e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1\"" Oct 28 00:13:51.301740 kubelet[2753]: E1028 00:13:51.301520 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:51.309443 containerd[1597]: time="2025-10-28T00:13:51.309370303Z" level=info msg="CreateContainer within sandbox \"93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 00:13:51.310557 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 00:13:51.329787 containerd[1597]: time="2025-10-28T00:13:51.329739801Z" level=info msg="Container e40f2d60c2bfcc9607808d9c92a4ed6c6eafdb0feeab112e842c0abd3fb3bbd8: CDI devices from CRI Config.CDIDevices: []" Oct 28 00:13:51.337874 containerd[1597]: time="2025-10-28T00:13:51.337832534Z" level=info msg="CreateContainer within sandbox 
\"93262e7c8314f577f8c81e9b4f4d26d57d0ea7145ed052a410eb6736a2f4cbc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e40f2d60c2bfcc9607808d9c92a4ed6c6eafdb0feeab112e842c0abd3fb3bbd8\"" Oct 28 00:13:51.338804 containerd[1597]: time="2025-10-28T00:13:51.338622185Z" level=info msg="StartContainer for \"e40f2d60c2bfcc9607808d9c92a4ed6c6eafdb0feeab112e842c0abd3fb3bbd8\"" Oct 28 00:13:51.339656 containerd[1597]: time="2025-10-28T00:13:51.339616010Z" level=info msg="connecting to shim e40f2d60c2bfcc9607808d9c92a4ed6c6eafdb0feeab112e842c0abd3fb3bbd8" address="unix:///run/containerd/s/f1f21ee62c0d72ae6a1f6e605bc9de8720e1adf4d74891bc3615c1d370b0e7f1" protocol=ttrpc version=3 Oct 28 00:13:51.347206 containerd[1597]: time="2025-10-28T00:13:51.347148422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v8g72,Uid:ffaa584b-c0eb-4855-80a7-bb13ffeca77a,Namespace:kube-system,Attempt:0,} returns sandbox id \"df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd\"" Oct 28 00:13:51.348172 kubelet[2753]: E1028 00:13:51.348133 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:51.364061 containerd[1597]: time="2025-10-28T00:13:51.364015108Z" level=info msg="CreateContainer within sandbox \"df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 00:13:51.370651 systemd[1]: Started cri-containerd-e40f2d60c2bfcc9607808d9c92a4ed6c6eafdb0feeab112e842c0abd3fb3bbd8.scope - libcontainer container e40f2d60c2bfcc9607808d9c92a4ed6c6eafdb0feeab112e842c0abd3fb3bbd8. 
Oct 28 00:13:51.373738 containerd[1597]: time="2025-10-28T00:13:51.373689599Z" level=info msg="Container 4dbca1c3fcbef97a9093e3b5930800d7eb6db69ea533414d1d0d7ed2fc39908a: CDI devices from CRI Config.CDIDevices: []" Oct 28 00:13:51.382438 containerd[1597]: time="2025-10-28T00:13:51.380909124Z" level=info msg="CreateContainer within sandbox \"df4e5c5ee877fc7aca7bf9794871e5777ae832bb892dafacf2cf788ec3b1fdfd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4dbca1c3fcbef97a9093e3b5930800d7eb6db69ea533414d1d0d7ed2fc39908a\"" Oct 28 00:13:51.382438 containerd[1597]: time="2025-10-28T00:13:51.381589591Z" level=info msg="StartContainer for \"4dbca1c3fcbef97a9093e3b5930800d7eb6db69ea533414d1d0d7ed2fc39908a\"" Oct 28 00:13:51.384825 containerd[1597]: time="2025-10-28T00:13:51.384746463Z" level=info msg="connecting to shim 4dbca1c3fcbef97a9093e3b5930800d7eb6db69ea533414d1d0d7ed2fc39908a" address="unix:///run/containerd/s/595346ec07d0ff2cd4e2e4b183e5ca218e07115eb56aec1c9aaa345763c9a888" protocol=ttrpc version=3 Oct 28 00:13:51.410105 systemd[1]: Started cri-containerd-4dbca1c3fcbef97a9093e3b5930800d7eb6db69ea533414d1d0d7ed2fc39908a.scope - libcontainer container 4dbca1c3fcbef97a9093e3b5930800d7eb6db69ea533414d1d0d7ed2fc39908a. 
Oct 28 00:13:51.418448 containerd[1597]: time="2025-10-28T00:13:51.417932938Z" level=info msg="StartContainer for \"e40f2d60c2bfcc9607808d9c92a4ed6c6eafdb0feeab112e842c0abd3fb3bbd8\" returns successfully" Oct 28 00:13:51.458610 containerd[1597]: time="2025-10-28T00:13:51.458478219Z" level=info msg="StartContainer for \"4dbca1c3fcbef97a9093e3b5930800d7eb6db69ea533414d1d0d7ed2fc39908a\" returns successfully" Oct 28 00:13:51.580459 containerd[1597]: time="2025-10-28T00:13:51.580295582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:51.603966 containerd[1597]: time="2025-10-28T00:13:51.603914165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 00:13:51.604036 containerd[1597]: time="2025-10-28T00:13:51.603920818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 00:13:51.604316 kubelet[2753]: E1028 00:13:51.604251 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:13:51.604316 kubelet[2753]: E1028 00:13:51.604305 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:13:51.604698 
kubelet[2753]: E1028 00:13:51.604579 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76b9897cff-8q7s2_calico-apiserver(3e103a9d-d0d4-4b11-9367-559f1c47a552): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:51.604698 kubelet[2753]: E1028 00:13:51.604643 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" podUID="3e103a9d-d0d4-4b11-9367-559f1c47a552" Oct 28 00:13:51.604940 containerd[1597]: time="2025-10-28T00:13:51.604733482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 00:13:51.696035 kubelet[2753]: E1028 00:13:51.695725 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:51.701522 kubelet[2753]: E1028 00:13:51.701397 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:51.707918 kubelet[2753]: E1028 00:13:51.707881 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" podUID="3e103a9d-d0d4-4b11-9367-559f1c47a552" Oct 28 00:13:51.708188 kubelet[2753]: E1028 00:13:51.707929 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" podUID="eb03ae87-19b8-4ccf-ad2d-924a8b3b4421" Oct 28 00:13:51.708188 kubelet[2753]: E1028 00:13:51.708005 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" podUID="02d0f3a2-6615-4333-9168-153cfad8a1a2" Oct 28 00:13:51.708188 kubelet[2753]: E1028 00:13:51.708011 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" podUID="e002b83f-c358-4e16-aba8-6f13c28c0b61" Oct 28 00:13:51.713777 kubelet[2753]: I1028 00:13:51.713672 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v8g72" podStartSLOduration=45.713627619 podStartE2EDuration="45.713627619s" podCreationTimestamp="2025-10-28 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 00:13:51.712819352 +0000 UTC m=+51.990450966" watchObservedRunningTime="2025-10-28 00:13:51.713627619 +0000 UTC m=+51.991259223" Oct 28 00:13:51.785360 kubelet[2753]: I1028 00:13:51.785280 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-flcqw" podStartSLOduration=45.785259725 podStartE2EDuration="45.785259725s" podCreationTimestamp="2025-10-28 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 00:13:51.783782363 +0000 UTC m=+52.061413967" watchObservedRunningTime="2025-10-28 00:13:51.785259725 +0000 UTC m=+52.062891329" Oct 28 00:13:51.871670 systemd-networkd[1499]: cali4c262b8a09b: Gained IPv6LL Oct 28 00:13:51.957528 containerd[1597]: time="2025-10-28T00:13:51.957477789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:51.958618 containerd[1597]: time="2025-10-28T00:13:51.958582803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 00:13:51.958715 containerd[1597]: time="2025-10-28T00:13:51.958659858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 00:13:51.958856 kubelet[2753]: E1028 00:13:51.958812 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 00:13:51.958911 kubelet[2753]: E1028 00:13:51.958862 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 00:13:51.958999 kubelet[2753]: E1028 00:13:51.958974 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgknx_calico-system(9cf7db7c-cf1f-40f0-bd37-4896435636ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:51.959731 containerd[1597]: time="2025-10-28T00:13:51.959701231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 00:13:51.999986 systemd-networkd[1499]: cali8cc6384bfba: Gained IPv6LL Oct 28 00:13:52.319921 systemd-networkd[1499]: calib7840544520: Gained IPv6LL Oct 28 00:13:52.327077 containerd[1597]: time="2025-10-28T00:13:52.327004759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:13:52.328514 containerd[1597]: 
time="2025-10-28T00:13:52.328467434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 00:13:52.329062 containerd[1597]: time="2025-10-28T00:13:52.328545360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 00:13:52.329122 kubelet[2753]: E1028 00:13:52.328731 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 00:13:52.329122 kubelet[2753]: E1028 00:13:52.328794 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 00:13:52.329122 kubelet[2753]: E1028 00:13:52.328900 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgknx_calico-system(9cf7db7c-cf1f-40f0-bd37-4896435636ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 00:13:52.329525 kubelet[2753]: E1028 00:13:52.328967 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:13:52.383659 systemd-networkd[1499]: cali3401bb92846: Gained IPv6LL Oct 28 00:13:52.447674 systemd-networkd[1499]: calicbc8f254541: Gained IPv6LL Oct 28 00:13:52.707363 kubelet[2753]: E1028 00:13:52.707321 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:52.708154 kubelet[2753]: E1028 00:13:52.708122 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:52.709399 kubelet[2753]: E1028 00:13:52.709299 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:13:53.656906 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:41958.service - OpenSSH per-connection server daemon (10.0.0.1:41958). Oct 28 00:13:53.709909 kubelet[2753]: E1028 00:13:53.709821 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:53.710334 kubelet[2753]: E1028 00:13:53.709952 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:13:53.728816 sshd[4920]: Accepted publickey for core from 10.0.0.1 port 41958 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:13:53.730735 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:13:53.735500 systemd-logind[1578]: New session 10 of user core. Oct 28 00:13:53.747601 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 28 00:13:53.882370 sshd[4923]: Connection closed by 10.0.0.1 port 41958 Oct 28 00:13:53.882687 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Oct 28 00:13:53.886906 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:41958.service: Deactivated successfully. 
Oct 28 00:13:53.889483 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 00:13:53.890336 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Oct 28 00:13:53.891631 systemd-logind[1578]: Removed session 10. Oct 28 00:13:58.899514 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:41966.service - OpenSSH per-connection server daemon (10.0.0.1:41966). Oct 28 00:13:58.981276 sshd[4945]: Accepted publickey for core from 10.0.0.1 port 41966 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:13:58.983646 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:13:58.990166 systemd-logind[1578]: New session 11 of user core. Oct 28 00:13:58.999705 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 28 00:13:59.135535 sshd[4948]: Connection closed by 10.0.0.1 port 41966 Oct 28 00:13:59.134459 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Oct 28 00:13:59.148521 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:41966.service: Deactivated successfully. Oct 28 00:13:59.151030 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 00:13:59.152199 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit. Oct 28 00:13:59.155311 systemd[1]: Started sshd@11-10.0.0.58:22-10.0.0.1:41974.service - OpenSSH per-connection server daemon (10.0.0.1:41974). Oct 28 00:13:59.156048 systemd-logind[1578]: Removed session 11. Oct 28 00:13:59.213180 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 41974 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:13:59.215567 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:13:59.220695 systemd-logind[1578]: New session 12 of user core. Oct 28 00:13:59.237581 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 28 00:13:59.667756 sshd[4966]: Connection closed by 10.0.0.1 port 41974 Oct 28 00:13:59.668108 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Oct 28 00:13:59.682323 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:41974.service: Deactivated successfully. Oct 28 00:13:59.684285 systemd[1]: session-12.scope: Deactivated successfully. Oct 28 00:13:59.685099 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit. Oct 28 00:13:59.687657 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:41990.service - OpenSSH per-connection server daemon (10.0.0.1:41990). Oct 28 00:13:59.688386 systemd-logind[1578]: Removed session 12. Oct 28 00:13:59.751370 sshd[4977]: Accepted publickey for core from 10.0.0.1 port 41990 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:13:59.753097 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:13:59.758297 systemd-logind[1578]: New session 13 of user core. Oct 28 00:13:59.765578 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 28 00:14:00.207497 sshd[4980]: Connection closed by 10.0.0.1 port 41990 Oct 28 00:14:00.209872 sshd-session[4977]: pam_unix(sshd:session): session closed for user core Oct 28 00:14:00.215951 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:41990.service: Deactivated successfully. Oct 28 00:14:00.218379 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 00:14:00.219438 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit. Oct 28 00:14:00.220762 systemd-logind[1578]: Removed session 13. 
Oct 28 00:14:01.827316 containerd[1597]: time="2025-10-28T00:14:01.826971381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 00:14:02.249392 containerd[1597]: time="2025-10-28T00:14:02.249335921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:02.250616 containerd[1597]: time="2025-10-28T00:14:02.250575191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 00:14:02.250616 containerd[1597]: time="2025-10-28T00:14:02.250601563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 00:14:02.250868 kubelet[2753]: E1028 00:14:02.250811 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 00:14:02.250868 kubelet[2753]: E1028 00:14:02.250875 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 00:14:02.251262 kubelet[2753]: E1028 00:14:02.250997 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7df4f7cbc6-jjqdv_calico-system(8f40531a-a7a8-40d1-9aaf-cd96278fb41e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:02.252005 containerd[1597]: time="2025-10-28T00:14:02.251943892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 00:14:02.578955 containerd[1597]: time="2025-10-28T00:14:02.578795912Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:02.580320 containerd[1597]: time="2025-10-28T00:14:02.580226010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 00:14:02.580491 containerd[1597]: time="2025-10-28T00:14:02.580354568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 00:14:02.580630 kubelet[2753]: E1028 00:14:02.580561 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 00:14:02.580695 kubelet[2753]: E1028 00:14:02.580635 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 
00:14:02.580845 kubelet[2753]: E1028 00:14:02.580764 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7df4f7cbc6-jjqdv_calico-system(8f40531a-a7a8-40d1-9aaf-cd96278fb41e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:02.580905 kubelet[2753]: E1028 00:14:02.580829 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7df4f7cbc6-jjqdv" podUID="8f40531a-a7a8-40d1-9aaf-cd96278fb41e" Oct 28 00:14:02.820794 containerd[1597]: time="2025-10-28T00:14:02.820744829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 00:14:03.460043 containerd[1597]: time="2025-10-28T00:14:03.459968195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:03.498102 containerd[1597]: time="2025-10-28T00:14:03.498022100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 00:14:03.498250 containerd[1597]: time="2025-10-28T00:14:03.498059503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 00:14:03.498385 kubelet[2753]: E1028 00:14:03.498315 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:03.498385 kubelet[2753]: E1028 00:14:03.498384 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:03.498837 kubelet[2753]: E1028 00:14:03.498637 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-9c466c9c-dq2lt_calico-apiserver(02d0f3a2-6615-4333-9168-153cfad8a1a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:03.498837 kubelet[2753]: E1028 00:14:03.498700 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" podUID="02d0f3a2-6615-4333-9168-153cfad8a1a2" Oct 28 00:14:03.498948 containerd[1597]: time="2025-10-28T00:14:03.498854014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 00:14:03.941575 containerd[1597]: time="2025-10-28T00:14:03.941519019Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:03.977683 containerd[1597]: time="2025-10-28T00:14:03.977616051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 00:14:03.977683 containerd[1597]: time="2025-10-28T00:14:03.977652051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 00:14:03.977930 kubelet[2753]: E1028 00:14:03.977884 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 00:14:03.977983 kubelet[2753]: E1028 00:14:03.977936 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 00:14:03.978056 kubelet[2753]: E1028 00:14:03.978031 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dccbd5fb7-7mmn5_calico-system(e002b83f-c358-4e16-aba8-6f13c28c0b61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:03.978164 kubelet[2753]: E1028 00:14:03.978065 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" podUID="e002b83f-c358-4e16-aba8-6f13c28c0b61" Oct 28 00:14:04.820890 containerd[1597]: time="2025-10-28T00:14:04.820816659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 00:14:05.225002 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:37478.service - OpenSSH per-connection server daemon (10.0.0.1:37478). Oct 28 00:14:05.285576 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 37478 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:14:05.287357 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:14:05.295344 systemd-logind[1578]: New session 14 of user core. Oct 28 00:14:05.313689 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 28 00:14:05.328528 containerd[1597]: time="2025-10-28T00:14:05.328474006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:05.330088 containerd[1597]: time="2025-10-28T00:14:05.329905181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 00:14:05.330088 containerd[1597]: time="2025-10-28T00:14:05.329932203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 00:14:05.330268 kubelet[2753]: E1028 00:14:05.330213 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 00:14:05.330703 kubelet[2753]: E1028 00:14:05.330275 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 00:14:05.330703 kubelet[2753]: E1028 00:14:05.330506 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pd76c_calico-system(857b2565-c255-4b2a-a804-4d8f469fd36f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:05.330703 kubelet[2753]: E1028 00:14:05.330553 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pd76c" podUID="857b2565-c255-4b2a-a804-4d8f469fd36f" Oct 28 00:14:05.330813 containerd[1597]: time="2025-10-28T00:14:05.330762441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 00:14:05.444493 sshd[5003]: Connection closed by 10.0.0.1 port 37478 Oct 28 00:14:05.444893 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Oct 28 00:14:05.450379 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:37478.service: Deactivated successfully. Oct 28 00:14:05.452564 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 00:14:05.453482 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit. Oct 28 00:14:05.454922 systemd-logind[1578]: Removed session 14. 
Oct 28 00:14:05.678138 containerd[1597]: time="2025-10-28T00:14:05.677574831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:05.680275 containerd[1597]: time="2025-10-28T00:14:05.680200114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 00:14:05.680358 containerd[1597]: time="2025-10-28T00:14:05.680266322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 00:14:05.680613 kubelet[2753]: E1028 00:14:05.680553 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:05.680715 kubelet[2753]: E1028 00:14:05.680615 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:05.680764 kubelet[2753]: E1028 00:14:05.680732 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-9c466c9c-29kbz_calico-apiserver(eb03ae87-19b8-4ccf-ad2d-924a8b3b4421): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:05.680820 kubelet[2753]: E1028 00:14:05.680778 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" podUID="eb03ae87-19b8-4ccf-ad2d-924a8b3b4421" Oct 28 00:14:05.822236 containerd[1597]: time="2025-10-28T00:14:05.822181779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 00:14:06.219097 containerd[1597]: time="2025-10-28T00:14:06.218951137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:06.242958 containerd[1597]: time="2025-10-28T00:14:06.242871958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 00:14:06.242958 containerd[1597]: time="2025-10-28T00:14:06.242901444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 00:14:06.243273 kubelet[2753]: E1028 00:14:06.243230 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:06.243359 
kubelet[2753]: E1028 00:14:06.243288 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:06.243655 kubelet[2753]: E1028 00:14:06.243584 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76b9897cff-8q7s2_calico-apiserver(3e103a9d-d0d4-4b11-9367-559f1c47a552): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:06.243725 kubelet[2753]: E1028 00:14:06.243662 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" podUID="3e103a9d-d0d4-4b11-9367-559f1c47a552" Oct 28 00:14:06.243814 containerd[1597]: time="2025-10-28T00:14:06.243762460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 00:14:06.682217 containerd[1597]: time="2025-10-28T00:14:06.682161244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:06.720140 containerd[1597]: time="2025-10-28T00:14:06.720023790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 00:14:06.720140 containerd[1597]: 
time="2025-10-28T00:14:06.720068926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 00:14:06.720458 kubelet[2753]: E1028 00:14:06.720373 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 00:14:06.720883 kubelet[2753]: E1028 00:14:06.720456 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 00:14:06.720883 kubelet[2753]: E1028 00:14:06.720582 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgknx_calico-system(9cf7db7c-cf1f-40f0-bd37-4896435636ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:06.721599 containerd[1597]: time="2025-10-28T00:14:06.721572148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 00:14:07.179110 containerd[1597]: time="2025-10-28T00:14:07.179033464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:07.206493 containerd[1597]: time="2025-10-28T00:14:07.206348996Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 00:14:07.206675 containerd[1597]: time="2025-10-28T00:14:07.206462234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 00:14:07.206891 kubelet[2753]: E1028 00:14:07.206817 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 00:14:07.206891 kubelet[2753]: E1028 00:14:07.206887 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 00:14:07.207046 kubelet[2753]: E1028 00:14:07.207002 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgknx_calico-system(9cf7db7c-cf1f-40f0-bd37-4896435636ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found" logger="UnhandledError" Oct 28 00:14:07.207119 kubelet[2753]: E1028 00:14:07.207069 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad" Oct 28 00:14:10.462852 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:37494.service - OpenSSH per-connection server daemon (10.0.0.1:37494). Oct 28 00:14:10.531979 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 37494 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:14:10.534004 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:14:10.539684 systemd-logind[1578]: New session 15 of user core. Oct 28 00:14:10.548684 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 00:14:10.756923 sshd[5030]: Connection closed by 10.0.0.1 port 37494 Oct 28 00:14:10.757228 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Oct 28 00:14:10.763742 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:37494.service: Deactivated successfully. Oct 28 00:14:10.765753 systemd[1]: session-15.scope: Deactivated successfully. Oct 28 00:14:10.766693 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit. 
Oct 28 00:14:10.768189 systemd-logind[1578]: Removed session 15.
Oct 28 00:14:13.820756 kubelet[2753]: E1028 00:14:13.820702 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" podUID="02d0f3a2-6615-4333-9168-153cfad8a1a2"
Oct 28 00:14:14.665923 kubelet[2753]: E1028 00:14:14.665887 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:14:14.790380 containerd[1597]: time="2025-10-28T00:14:14.790319558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249\" id:\"84e3fb479d75e9cf687bf8f6a555e733628fd4635f7e51860bd8a3eaaa79c2ec\" pid:5057 exit_status:1 exited_at:{seconds:1761610454 nanos:789604870}"
Oct 28 00:14:14.875484 containerd[1597]: time="2025-10-28T00:14:14.875427917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39e38e3727647a05e1e51a358b8cb2723290fde9671fed4767d68c6b9a4af249\" id:\"410dcb6a8d9b9dfac1d8892a33da24c247bbef1f4030795927d1cf06fcd16344\" pid:5081 exit_status:1 exited_at:{seconds:1761610454 nanos:875084420}"
Oct 28 00:14:15.771714 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:48130.service - OpenSSH per-connection server daemon (10.0.0.1:48130).
Oct 28 00:14:15.854063 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 48130 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs
Oct 28 00:14:15.856748 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 00:14:15.863448 systemd-logind[1578]: New session 16 of user core.
Oct 28 00:14:15.871631 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 28 00:14:16.025657 sshd[5098]: Connection closed by 10.0.0.1 port 48130
Oct 28 00:14:16.025957 sshd-session[5095]: pam_unix(sshd:session): session closed for user core
Oct 28 00:14:16.031770 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:48130.service: Deactivated successfully.
Oct 28 00:14:16.034318 systemd[1]: session-16.scope: Deactivated successfully.
Oct 28 00:14:16.035165 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit.
Oct 28 00:14:16.036547 systemd-logind[1578]: Removed session 16.
Oct 28 00:14:16.820896 kubelet[2753]: E1028 00:14:16.820854 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7df4f7cbc6-jjqdv" podUID="8f40531a-a7a8-40d1-9aaf-cd96278fb41e"
Oct 28 00:14:18.820254 kubelet[2753]: E1028 00:14:18.820187 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" podUID="e002b83f-c358-4e16-aba8-6f13c28c0b61"
Oct 28 00:14:19.820862 kubelet[2753]: E1028 00:14:19.820556 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" podUID="3e103a9d-d0d4-4b11-9367-559f1c47a552"
Oct 28 00:14:19.821947 kubelet[2753]: E1028 00:14:19.821810 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad"
Oct 28 00:14:20.820602 kubelet[2753]: E1028 00:14:20.820549 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:14:20.820918 kubelet[2753]: E1028 00:14:20.820768 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pd76c" podUID="857b2565-c255-4b2a-a804-4d8f469fd36f"
Oct 28 00:14:20.821542 kubelet[2753]: E1028 00:14:20.821496 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" podUID="eb03ae87-19b8-4ccf-ad2d-924a8b3b4421"
Oct 28 00:14:21.047192 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:48132.service - OpenSSH per-connection server daemon (10.0.0.1:48132).
Oct 28 00:14:21.103178 sshd[5115]: Accepted publickey for core from 10.0.0.1 port 48132 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs
Oct 28 00:14:21.105020 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 00:14:21.110284 systemd-logind[1578]: New session 17 of user core.
Oct 28 00:14:21.116845 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 28 00:14:21.245777 sshd[5118]: Connection closed by 10.0.0.1 port 48132
Oct 28 00:14:21.246255 sshd-session[5115]: pam_unix(sshd:session): session closed for user core
Oct 28 00:14:21.258317 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:48132.service: Deactivated successfully.
Oct 28 00:14:21.261210 systemd[1]: session-17.scope: Deactivated successfully.
Oct 28 00:14:21.262187 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit.
Oct 28 00:14:21.265862 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:48134.service - OpenSSH per-connection server daemon (10.0.0.1:48134).
Oct 28 00:14:21.266812 systemd-logind[1578]: Removed session 17.
Oct 28 00:14:21.320357 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 48134 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs
Oct 28 00:14:21.323066 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 00:14:21.330690 systemd-logind[1578]: New session 18 of user core.
Oct 28 00:14:21.338724 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 28 00:14:21.730998 sshd[5134]: Connection closed by 10.0.0.1 port 48134
Oct 28 00:14:21.731789 sshd-session[5131]: pam_unix(sshd:session): session closed for user core
Oct 28 00:14:21.743469 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:48134.service: Deactivated successfully.
Oct 28 00:14:21.746011 systemd[1]: session-18.scope: Deactivated successfully.
Oct 28 00:14:21.747193 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit.
Oct 28 00:14:21.751461 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:48150.service - OpenSSH per-connection server daemon (10.0.0.1:48150).
Oct 28 00:14:21.752218 systemd-logind[1578]: Removed session 18.
Oct 28 00:14:21.820436 kubelet[2753]: E1028 00:14:21.820091 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:14:21.836640 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 48150 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs
Oct 28 00:14:21.838773 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 00:14:21.844487 systemd-logind[1578]: New session 19 of user core.
Oct 28 00:14:21.851548 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 28 00:14:22.630722 sshd[5149]: Connection closed by 10.0.0.1 port 48150
Oct 28 00:14:22.631032 sshd-session[5146]: pam_unix(sshd:session): session closed for user core
Oct 28 00:14:22.644428 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:48150.service: Deactivated successfully.
Oct 28 00:14:22.648201 systemd[1]: session-19.scope: Deactivated successfully.
Oct 28 00:14:22.651170 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit.
Oct 28 00:14:22.653656 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:48162.service - OpenSSH per-connection server daemon (10.0.0.1:48162).
Oct 28 00:14:22.655842 systemd-logind[1578]: Removed session 19.
Oct 28 00:14:22.712848 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 48162 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs
Oct 28 00:14:22.714799 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 00:14:22.720243 systemd-logind[1578]: New session 20 of user core.
Oct 28 00:14:22.735741 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 28 00:14:23.034468 sshd[5169]: Connection closed by 10.0.0.1 port 48162
Oct 28 00:14:23.035897 sshd-session[5166]: pam_unix(sshd:session): session closed for user core
Oct 28 00:14:23.048823 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:48162.service: Deactivated successfully.
Oct 28 00:14:23.051312 systemd[1]: session-20.scope: Deactivated successfully.
Oct 28 00:14:23.055593 systemd-logind[1578]: Session 20 logged out. Waiting for processes to exit.
Oct 28 00:14:23.059524 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:49954.service - OpenSSH per-connection server daemon (10.0.0.1:49954).
Oct 28 00:14:23.060662 systemd-logind[1578]: Removed session 20.
Oct 28 00:14:23.118116 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 49954 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs
Oct 28 00:14:23.120285 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 00:14:23.125917 systemd-logind[1578]: New session 21 of user core.
Oct 28 00:14:23.133626 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 28 00:14:23.283077 sshd[5183]: Connection closed by 10.0.0.1 port 49954
Oct 28 00:14:23.283458 sshd-session[5180]: pam_unix(sshd:session): session closed for user core
Oct 28 00:14:23.288714 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:49954.service: Deactivated successfully.
Oct 28 00:14:23.291273 systemd[1]: session-21.scope: Deactivated successfully.
Oct 28 00:14:23.292187 systemd-logind[1578]: Session 21 logged out. Waiting for processes to exit.
Oct 28 00:14:23.293302 systemd-logind[1578]: Removed session 21.
Oct 28 00:14:26.819114 kubelet[2753]: E1028 00:14:26.819046 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 00:14:28.303476 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:49960.service - OpenSSH per-connection server daemon (10.0.0.1:49960).
Oct 28 00:14:28.547030 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 49960 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs
Oct 28 00:14:28.550825 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 00:14:28.556633 systemd-logind[1578]: New session 22 of user core.
Oct 28 00:14:28.563587 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 28 00:14:28.717081 sshd[5209]: Connection closed by 10.0.0.1 port 49960
Oct 28 00:14:28.717378 sshd-session[5205]: pam_unix(sshd:session): session closed for user core
Oct 28 00:14:28.722617 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:49960.service: Deactivated successfully.
Oct 28 00:14:28.724912 systemd[1]: session-22.scope: Deactivated successfully.
Oct 28 00:14:28.725939 systemd-logind[1578]: Session 22 logged out. Waiting for processes to exit.
Oct 28 00:14:28.727618 systemd-logind[1578]: Removed session 22.
Oct 28 00:14:28.822671 containerd[1597]: time="2025-10-28T00:14:28.822118284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 28 00:14:29.158560 containerd[1597]: time="2025-10-28T00:14:29.158500015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 00:14:29.165356 containerd[1597]: time="2025-10-28T00:14:29.165249238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 28 00:14:29.165356 containerd[1597]: time="2025-10-28T00:14:29.165310233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 28 00:14:29.165634 kubelet[2753]: E1028 00:14:29.165582 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 00:14:29.166045 kubelet[2753]: E1028 00:14:29.165646 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 00:14:29.166045 kubelet[2753]: E1028 00:14:29.165864 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-9c466c9c-dq2lt_calico-apiserver(02d0f3a2-6615-4333-9168-153cfad8a1a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 28 00:14:29.166045 kubelet[2753]: E1028 00:14:29.165937 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-dq2lt" podUID="02d0f3a2-6615-4333-9168-153cfad8a1a2"
Oct 28 00:14:29.166833 containerd[1597]: time="2025-10-28T00:14:29.166518300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 28 00:14:29.556240 containerd[1597]: time="2025-10-28T00:14:29.556093618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 00:14:29.558046 containerd[1597]: time="2025-10-28T00:14:29.557913278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 28 00:14:29.558139 containerd[1597]: time="2025-10-28T00:14:29.557978171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 28 00:14:29.558487 kubelet[2753]: E1028 00:14:29.558375 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 28 00:14:29.558576 kubelet[2753]: E1028 00:14:29.558505 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 28 00:14:29.558854 kubelet[2753]: E1028 00:14:29.558829 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7df4f7cbc6-jjqdv_calico-system(8f40531a-a7a8-40d1-9aaf-cd96278fb41e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 28 00:14:29.560231 containerd[1597]: time="2025-10-28T00:14:29.560201318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 28 00:14:29.918814 containerd[1597]: time="2025-10-28T00:14:29.918742806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 00:14:30.012743 containerd[1597]: time="2025-10-28T00:14:30.012682701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 28 00:14:30.012743 containerd[1597]: time="2025-10-28T00:14:30.012716175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 28 00:14:30.013065 kubelet[2753]: E1028 00:14:30.013017 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 28 00:14:30.013137 kubelet[2753]: E1028 00:14:30.013071 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 28 00:14:30.013197 kubelet[2753]: E1028 00:14:30.013172 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7df4f7cbc6-jjqdv_calico-system(8f40531a-a7a8-40d1-9aaf-cd96278fb41e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 28 00:14:30.013259 kubelet[2753]: E1028 00:14:30.013226 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7df4f7cbc6-jjqdv" podUID="8f40531a-a7a8-40d1-9aaf-cd96278fb41e"
Oct 28 00:14:32.820799 containerd[1597]: time="2025-10-28T00:14:32.820397092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 28 00:14:33.200877 containerd[1597]: time="2025-10-28T00:14:33.200809134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 00:14:33.202369 containerd[1597]: time="2025-10-28T00:14:33.202304975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 28 00:14:33.202369 containerd[1597]: time="2025-10-28T00:14:33.202355120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 28 00:14:33.202671 kubelet[2753]: E1028 00:14:33.202616 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 28 00:14:33.203116 kubelet[2753]: E1028 00:14:33.202679 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 28 00:14:33.203116 kubelet[2753]: E1028 00:14:33.202951 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgknx_calico-system(9cf7db7c-cf1f-40f0-bd37-4896435636ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 28 00:14:33.203259 containerd[1597]: time="2025-10-28T00:14:33.203073404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 28 00:14:33.569997 containerd[1597]: time="2025-10-28T00:14:33.569804420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 00:14:33.571622 containerd[1597]: time="2025-10-28T00:14:33.571562559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 28 00:14:33.571723 containerd[1597]: time="2025-10-28T00:14:33.571663621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 28 00:14:33.571923 kubelet[2753]: E1028 00:14:33.571863 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 28 00:14:33.571973 kubelet[2753]: E1028 00:14:33.571926 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 28 00:14:33.572206 kubelet[2753]: E1028 00:14:33.572165 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dccbd5fb7-7mmn5_calico-system(e002b83f-c358-4e16-aba8-6f13c28c0b61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 28 00:14:33.572314 kubelet[2753]: E1028 00:14:33.572237 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dccbd5fb7-7mmn5" podUID="e002b83f-c358-4e16-aba8-6f13c28c0b61"
Oct 28 00:14:33.573626 containerd[1597]: time="2025-10-28T00:14:33.573597163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 28 00:14:33.734647 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:42772.service - OpenSSH per-connection server daemon (10.0.0.1:42772).
Oct 28 00:14:33.782605 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 42772 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs
Oct 28 00:14:33.784248 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 00:14:33.789438 systemd-logind[1578]: New session 23 of user core.
Oct 28 00:14:33.795558 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 28 00:14:33.915859 containerd[1597]: time="2025-10-28T00:14:33.915808486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 00:14:33.916479 sshd[5229]: Connection closed by 10.0.0.1 port 42772
Oct 28 00:14:33.916909 sshd-session[5226]: pam_unix(sshd:session): session closed for user core
Oct 28 00:14:33.917281 containerd[1597]: time="2025-10-28T00:14:33.917189538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 28 00:14:33.917516 containerd[1597]: time="2025-10-28T00:14:33.917217662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 28 00:14:33.917690 kubelet[2753]: E1028 00:14:33.917635 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 28 00:14:33.917790 kubelet[2753]: E1028 00:14:33.917702 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 28 00:14:33.918009 kubelet[2753]: E1028 00:14:33.917984 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgknx_calico-system(9cf7db7c-cf1f-40f0-bd37-4896435636ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 28 00:14:33.918259 kubelet[2753]: E1028 00:14:33.918162 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgknx" podUID="9cf7db7c-cf1f-40f0-bd37-4896435636ad"
Oct 28 00:14:33.918610 containerd[1597]: time="2025-10-28T00:14:33.918571132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 28 00:14:33.923836 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:42772.service: Deactivated successfully.
Oct 28 00:14:33.926124 systemd[1]: session-23.scope: Deactivated successfully.
Oct 28 00:14:33.927170 systemd-logind[1578]: Session 23 logged out. Waiting for processes to exit.
Oct 28 00:14:33.928513 systemd-logind[1578]: Removed session 23.
Oct 28 00:14:34.271166 containerd[1597]: time="2025-10-28T00:14:34.270990971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:34.307534 containerd[1597]: time="2025-10-28T00:14:34.307081837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 00:14:34.307534 containerd[1597]: time="2025-10-28T00:14:34.307190744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 00:14:34.307954 kubelet[2753]: E1028 00:14:34.307826 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 00:14:34.307954 kubelet[2753]: E1028 00:14:34.307933 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 00:14:34.308493 kubelet[2753]: E1028 00:14:34.308230 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pd76c_calico-system(857b2565-c255-4b2a-a804-4d8f469fd36f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:34.308493 kubelet[2753]: E1028 00:14:34.308307 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pd76c" podUID="857b2565-c255-4b2a-a804-4d8f469fd36f" Oct 28 00:14:34.309143 containerd[1597]: time="2025-10-28T00:14:34.309095179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 00:14:34.689034 containerd[1597]: time="2025-10-28T00:14:34.688973278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:34.690188 containerd[1597]: time="2025-10-28T00:14:34.690132960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 00:14:34.690256 containerd[1597]: time="2025-10-28T00:14:34.690198535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 00:14:34.690440 kubelet[2753]: E1028 00:14:34.690374 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:34.690494 kubelet[2753]: E1028 00:14:34.690464 2753 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:34.690582 kubelet[2753]: E1028 00:14:34.690556 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-9c466c9c-29kbz_calico-apiserver(eb03ae87-19b8-4ccf-ad2d-924a8b3b4421): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:34.690629 kubelet[2753]: E1028 00:14:34.690589 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9c466c9c-29kbz" podUID="eb03ae87-19b8-4ccf-ad2d-924a8b3b4421" Oct 28 00:14:34.820491 containerd[1597]: time="2025-10-28T00:14:34.820435071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 00:14:35.205716 containerd[1597]: time="2025-10-28T00:14:35.205658427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 00:14:35.207152 containerd[1597]: time="2025-10-28T00:14:35.207089143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 00:14:35.207220 containerd[1597]: time="2025-10-28T00:14:35.207107808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 00:14:35.207367 kubelet[2753]: E1028 00:14:35.207328 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:35.207447 kubelet[2753]: E1028 00:14:35.207379 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 00:14:35.207538 kubelet[2753]: E1028 00:14:35.207507 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76b9897cff-8q7s2_calico-apiserver(3e103a9d-d0d4-4b11-9367-559f1c47a552): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 00:14:35.207610 kubelet[2753]: E1028 00:14:35.207556 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b9897cff-8q7s2" podUID="3e103a9d-d0d4-4b11-9367-559f1c47a552" Oct 28 00:14:36.819909 kubelet[2753]: E1028 00:14:36.819870 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 00:14:38.936570 systemd[1]: Started sshd@23-10.0.0.58:22-10.0.0.1:42774.service - OpenSSH per-connection server daemon (10.0.0.1:42774). Oct 28 00:14:39.004863 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 42774 ssh2: RSA SHA256:g8Zd8J2MZfnw30Pjs8lTk1SCbX6nv6fLNQxg9VuxtKs Oct 28 00:14:39.006493 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 00:14:39.010992 systemd-logind[1578]: New session 24 of user core. Oct 28 00:14:39.025657 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 28 00:14:39.142653 sshd[5248]: Connection closed by 10.0.0.1 port 42774 Oct 28 00:14:39.143043 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Oct 28 00:14:39.147453 systemd[1]: sshd@23-10.0.0.58:22-10.0.0.1:42774.service: Deactivated successfully. Oct 28 00:14:39.149568 systemd[1]: session-24.scope: Deactivated successfully. Oct 28 00:14:39.150361 systemd-logind[1578]: Session 24 logged out. Waiting for processes to exit. Oct 28 00:14:39.151472 systemd-logind[1578]: Removed session 24.
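The journal above is dominated by the same NotFound/ErrImagePull failure repeating across several Calico image tags. As a rough triage step, one might tally which references are failing by extracting them from the `PullImage ... failed` lines. The POSIX-shell sketch below does this over an abridged inline sample; the `log` variable and its contents are illustrative stand-ins (shortened from entries like those above), not a verbatim slice of the journal.

```shell
# Hypothetical triage sketch: count failing image references in journal text.
# The inline sample below abbreviates real containerd entries for brevity.
log='Oct 28 00:14:33.917281 containerd[1597]: level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed"
Oct 28 00:14:34.307534 containerd[1597]: level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed"
Oct 28 00:14:34.690188 containerd[1597]: level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"
Oct 28 00:14:35.207152 containerd[1597]: level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"'

# Pull out each failing reference, then count occurrences, most frequent first.
summary=$(printf '%s\n' "$log" \
  | grep -o 'PullImage \\"[^\\]*\\" failed' \
  | sed 's/PullImage \\"//; s/\\" failed//' \
  | sort | uniq -c | sort -rn)
printf '%s\n' "$summary"
```

On a live host the input would instead come from the journal itself, e.g. piped in from something like `journalctl -u containerd`, rather than an inline variable; here apiserver:v3.30.4 surfaces as the most frequent failure (2 of 4 sample lines).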