Nov 23 23:04:34.770030 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 23 23:04:34.770061 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:49:09 -00 2025
Nov 23 23:04:34.770072 kernel: KASLR enabled
Nov 23 23:04:34.770077 kernel: efi: EFI v2.7 by EDK II
Nov 23 23:04:34.770083 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Nov 23 23:04:34.770088 kernel: random: crng init done
Nov 23 23:04:34.770095 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Nov 23 23:04:34.770100 kernel: secureboot: Secure boot enabled
Nov 23 23:04:34.770106 kernel: ACPI: Early table checksum verification disabled
Nov 23 23:04:34.770113 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Nov 23 23:04:34.770119 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 23 23:04:34.770125 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770130 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770136 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770143 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770151 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770157 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770163 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770169 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770175 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:04:34.770181 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 23 23:04:34.770188 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 23 23:04:34.770194 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:04:34.770200 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Nov 23 23:04:34.770207 kernel: Zone ranges:
Nov 23 23:04:34.770215 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:04:34.770222 kernel: DMA32 empty
Nov 23 23:04:34.770228 kernel: Normal empty
Nov 23 23:04:34.770234 kernel: Device empty
Nov 23 23:04:34.770240 kernel: Movable zone start for each node
Nov 23 23:04:34.770246 kernel: Early memory node ranges
Nov 23 23:04:34.770253 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Nov 23 23:04:34.770259 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Nov 23 23:04:34.770265 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Nov 23 23:04:34.770272 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Nov 23 23:04:34.770278 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Nov 23 23:04:34.770284 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Nov 23 23:04:34.770291 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Nov 23 23:04:34.770297 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Nov 23 23:04:34.770303 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 23 23:04:34.770312 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:04:34.770318 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 23 23:04:34.770324 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Nov 23 23:04:34.770331 kernel: psci: probing for conduit method from ACPI.
Nov 23 23:04:34.770339 kernel: psci: PSCIv1.1 detected in firmware.
Nov 23 23:04:34.770345 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 23 23:04:34.770351 kernel: psci: Trusted OS migration not required
Nov 23 23:04:34.770358 kernel: psci: SMC Calling Convention v1.1
Nov 23 23:04:34.770364 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 23 23:04:34.770371 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 23 23:04:34.770377 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 23 23:04:34.770383 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 23 23:04:34.770390 kernel: Detected PIPT I-cache on CPU0
Nov 23 23:04:34.770397 kernel: CPU features: detected: GIC system register CPU interface
Nov 23 23:04:34.770403 kernel: CPU features: detected: Spectre-v4
Nov 23 23:04:34.770410 kernel: CPU features: detected: Spectre-BHB
Nov 23 23:04:34.770416 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 23 23:04:34.770423 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 23 23:04:34.770429 kernel: CPU features: detected: ARM erratum 1418040
Nov 23 23:04:34.770435 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 23 23:04:34.770446 kernel: alternatives: applying boot alternatives
Nov 23 23:04:34.770454 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34
Nov 23 23:04:34.770461 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 23:04:34.770467 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 23:04:34.770476 kernel: Fallback order for Node 0: 0
Nov 23 23:04:34.770482 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Nov 23 23:04:34.770488 kernel: Policy zone: DMA
Nov 23 23:04:34.770505 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 23:04:34.770512 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 23 23:04:34.770518 kernel: software IO TLB: area num 4.
Nov 23 23:04:34.770524 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 23 23:04:34.770531 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Nov 23 23:04:34.770537 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 23 23:04:34.770543 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 23:04:34.770550 kernel: rcu: RCU event tracing is enabled.
Nov 23 23:04:34.770557 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 23 23:04:34.770565 kernel: Trampoline variant of Tasks RCU enabled.
Nov 23 23:04:34.770571 kernel: Tracing variant of Tasks RCU enabled.
Nov 23 23:04:34.770577 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 23:04:34.770584 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 23 23:04:34.770590 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 23 23:04:34.770597 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 23 23:04:34.770603 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 23 23:04:34.770609 kernel: GICv3: 256 SPIs implemented
Nov 23 23:04:34.770616 kernel: GICv3: 0 Extended SPIs implemented
Nov 23 23:04:34.770622 kernel: Root IRQ handler: gic_handle_irq
Nov 23 23:04:34.770628 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 23 23:04:34.770635 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 23 23:04:34.770642 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 23 23:04:34.770649 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 23 23:04:34.770655 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 23 23:04:34.770662 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 23 23:04:34.770668 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 23 23:04:34.770675 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 23 23:04:34.770681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 23:04:34.770687 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:04:34.770694 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 23 23:04:34.770700 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 23 23:04:34.770707 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 23 23:04:34.770715 kernel: arm-pv: using stolen time PV
Nov 23 23:04:34.770721 kernel: Console: colour dummy device 80x25
Nov 23 23:04:34.770728 kernel: ACPI: Core revision 20240827
Nov 23 23:04:34.770735 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 23 23:04:34.770741 kernel: pid_max: default: 32768 minimum: 301
Nov 23 23:04:34.770748 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 23 23:04:34.770754 kernel: landlock: Up and running.
Nov 23 23:04:34.770761 kernel: SELinux: Initializing.
Nov 23 23:04:34.770767 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:04:34.770776 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:04:34.770783 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 23:04:34.770789 kernel: rcu: Max phase no-delay instances is 400.
Nov 23 23:04:34.770796 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 23 23:04:34.770802 kernel: Remapping and enabling EFI services.
Nov 23 23:04:34.770809 kernel: smp: Bringing up secondary CPUs ...
Nov 23 23:04:34.770820 kernel: Detected PIPT I-cache on CPU1
Nov 23 23:04:34.770829 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 23 23:04:34.770854 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 23 23:04:34.770863 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:04:34.770875 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 23 23:04:34.770882 kernel: Detected PIPT I-cache on CPU2
Nov 23 23:04:34.770890 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 23 23:04:34.770897 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 23 23:04:34.770904 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:04:34.770911 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 23 23:04:34.770918 kernel: Detected PIPT I-cache on CPU3
Nov 23 23:04:34.770927 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 23 23:04:34.770934 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 23 23:04:34.770941 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:04:34.770948 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 23 23:04:34.770954 kernel: smp: Brought up 1 node, 4 CPUs
Nov 23 23:04:34.770961 kernel: SMP: Total of 4 processors activated.
Nov 23 23:04:34.770968 kernel: CPU: All CPU(s) started at EL1
Nov 23 23:04:34.770975 kernel: CPU features: detected: 32-bit EL0 Support
Nov 23 23:04:34.770982 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 23 23:04:34.770989 kernel: CPU features: detected: Common not Private translations
Nov 23 23:04:34.770998 kernel: CPU features: detected: CRC32 instructions
Nov 23 23:04:34.771005 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 23 23:04:34.771012 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 23 23:04:34.771019 kernel: CPU features: detected: LSE atomic instructions
Nov 23 23:04:34.771025 kernel: CPU features: detected: Privileged Access Never
Nov 23 23:04:34.771096 kernel: CPU features: detected: RAS Extension Support
Nov 23 23:04:34.771108 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 23 23:04:34.771116 kernel: alternatives: applying system-wide alternatives
Nov 23 23:04:34.771122 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 23 23:04:34.771134 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved)
Nov 23 23:04:34.771141 kernel: devtmpfs: initialized
Nov 23 23:04:34.771148 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 23:04:34.771155 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 23 23:04:34.771162 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 23 23:04:34.771169 kernel: 0 pages in range for non-PLT usage
Nov 23 23:04:34.771176 kernel: 508400 pages in range for PLT usage
Nov 23 23:04:34.771183 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 23:04:34.771190 kernel: SMBIOS 3.0.0 present.
Nov 23 23:04:34.771198 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 23 23:04:34.771243 kernel: DMI: Memory slots populated: 1/1
Nov 23 23:04:34.771250 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 23:04:34.771258 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 23 23:04:34.771265 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 23:04:34.771272 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 23:04:34.771278 kernel: audit: initializing netlink subsys (disabled)
Nov 23 23:04:34.771286 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Nov 23 23:04:34.771292 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 23:04:34.771303 kernel: cpuidle: using governor menu
Nov 23 23:04:34.771309 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 23 23:04:34.771316 kernel: ASID allocator initialised with 32768 entries
Nov 23 23:04:34.771323 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 23:04:34.771330 kernel: Serial: AMBA PL011 UART driver
Nov 23 23:04:34.771337 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 23:04:34.771344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 23:04:34.771351 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 23 23:04:34.771358 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 23 23:04:34.771367 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 23:04:34.771374 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 23:04:34.771381 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 23 23:04:34.771388 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 23 23:04:34.771394 kernel: ACPI: Added _OSI(Module Device)
Nov 23 23:04:34.771401 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 23:04:34.771409 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 23:04:34.771416 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 23:04:34.771423 kernel: ACPI: Interpreter enabled
Nov 23 23:04:34.771431 kernel: ACPI: Using GIC for interrupt routing
Nov 23 23:04:34.771438 kernel: ACPI: MCFG table detected, 1 entries
Nov 23 23:04:34.771445 kernel: ACPI: CPU0 has been hot-added
Nov 23 23:04:34.771452 kernel: ACPI: CPU1 has been hot-added
Nov 23 23:04:34.771459 kernel: ACPI: CPU2 has been hot-added
Nov 23 23:04:34.771466 kernel: ACPI: CPU3 has been hot-added
Nov 23 23:04:34.771473 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 23 23:04:34.771480 kernel: printk: legacy console [ttyAMA0] enabled
Nov 23 23:04:34.771487 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 23:04:34.771659 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 23 23:04:34.771725 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 23 23:04:34.771787 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 23 23:04:34.771849 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 23 23:04:34.771907 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 23 23:04:34.771917 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 23 23:04:34.771924 kernel: PCI host bridge to bus 0000:00
Nov 23 23:04:34.771996 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 23 23:04:34.772065 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 23 23:04:34.772137 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 23 23:04:34.772190 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 23:04:34.772275 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 23 23:04:34.772346 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 23:04:34.772410 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 23 23:04:34.772470 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 23 23:04:34.772618 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 23 23:04:34.772682 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 23 23:04:34.772806 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 23 23:04:34.772877 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 23 23:04:34.772934 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 23 23:04:34.772987 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 23 23:04:34.773061 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 23 23:04:34.773072 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 23 23:04:34.773080 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 23 23:04:34.773088 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 23 23:04:34.773095 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 23 23:04:34.773102 kernel: iommu: Default domain type: Translated
Nov 23 23:04:34.773110 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 23 23:04:34.773117 kernel: efivars: Registered efivars operations
Nov 23 23:04:34.773126 kernel: vgaarb: loaded
Nov 23 23:04:34.773133 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 23 23:04:34.773140 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 23:04:34.773148 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 23:04:34.773154 kernel: pnp: PnP ACPI init
Nov 23 23:04:34.773232 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 23 23:04:34.773243 kernel: pnp: PnP ACPI: found 1 devices
Nov 23 23:04:34.773250 kernel: NET: Registered PF_INET protocol family
Nov 23 23:04:34.773259 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 23:04:34.773267 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 23 23:04:34.773274 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 23:04:34.773281 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 23:04:34.773289 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 23 23:04:34.773296 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 23 23:04:34.773303 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:04:34.773311 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:04:34.773318 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 23:04:34.773327 kernel: PCI: CLS 0 bytes, default 64
Nov 23 23:04:34.773334 kernel: kvm [1]: HYP mode not available
Nov 23 23:04:34.773341 kernel: Initialise system trusted keyrings
Nov 23 23:04:34.773348 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 23 23:04:34.773355 kernel: Key type asymmetric registered
Nov 23 23:04:34.773362 kernel: Asymmetric key parser 'x509' registered
Nov 23 23:04:34.773370 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 23 23:04:34.773377 kernel: io scheduler mq-deadline registered
Nov 23 23:04:34.773384 kernel: io scheduler kyber registered
Nov 23 23:04:34.773391 kernel: io scheduler bfq registered
Nov 23 23:04:34.773399 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 23 23:04:34.773407 kernel: ACPI: button: Power Button [PWRB]
Nov 23 23:04:34.773414 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 23 23:04:34.773476 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 23 23:04:34.773486 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 23:04:34.773523 kernel: thunder_xcv, ver 1.0
Nov 23 23:04:34.773531 kernel: thunder_bgx, ver 1.0
Nov 23 23:04:34.773538 kernel: nicpf, ver 1.0
Nov 23 23:04:34.773547 kernel: nicvf, ver 1.0
Nov 23 23:04:34.773622 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 23 23:04:34.773681 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:04:34 UTC (1763939074)
Nov 23 23:04:34.773691 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 23:04:34.773698 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 23 23:04:34.773705 kernel: watchdog: NMI not fully supported
Nov 23 23:04:34.773712 kernel: watchdog: Hard watchdog permanently disabled
Nov 23 23:04:34.773719 kernel: NET: Registered PF_INET6 protocol family
Nov 23 23:04:34.773728 kernel: Segment Routing with IPv6
Nov 23 23:04:34.773735 kernel: In-situ OAM (IOAM) with IPv6
Nov 23 23:04:34.773741 kernel: NET: Registered PF_PACKET protocol family
Nov 23 23:04:34.773748 kernel: Key type dns_resolver registered
Nov 23 23:04:34.773755 kernel: registered taskstats version 1
Nov 23 23:04:34.773763 kernel: Loading compiled-in X.509 certificates
Nov 23 23:04:34.773770 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 98b0841f2908e51633cd38699ad12796cadb7bd1'
Nov 23 23:04:34.773778 kernel: Demotion targets for Node 0: null
Nov 23 23:04:34.773785 kernel: Key type .fscrypt registered
Nov 23 23:04:34.773793 kernel: Key type fscrypt-provisioning registered
Nov 23 23:04:34.773800 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 23:04:34.773808 kernel: ima: Allocated hash algorithm: sha1
Nov 23 23:04:34.773815 kernel: ima: No architecture policies found
Nov 23 23:04:34.773822 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 23 23:04:34.773829 kernel: clk: Disabling unused clocks
Nov 23 23:04:34.773835 kernel: PM: genpd: Disabling unused power domains
Nov 23 23:04:34.773842 kernel: Warning: unable to open an initial console.
Nov 23 23:04:34.773849 kernel: Freeing unused kernel memory: 39552K
Nov 23 23:04:34.773858 kernel: Run /init as init process
Nov 23 23:04:34.773865 kernel: with arguments:
Nov 23 23:04:34.773871 kernel: /init
Nov 23 23:04:34.773878 kernel: with environment:
Nov 23 23:04:34.773885 kernel: HOME=/
Nov 23 23:04:34.773892 kernel: TERM=linux
Nov 23 23:04:34.773900 systemd[1]: Successfully made /usr/ read-only.
Nov 23 23:04:34.773910 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:04:34.773920 systemd[1]: Detected virtualization kvm.
Nov 23 23:04:34.773928 systemd[1]: Detected architecture arm64.
Nov 23 23:04:34.773935 systemd[1]: Running in initrd.
Nov 23 23:04:34.773942 systemd[1]: No hostname configured, using default hostname.
Nov 23 23:04:34.773950 systemd[1]: Hostname set to <localhost>.
Nov 23 23:04:34.773958 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 23:04:34.773966 systemd[1]: Queued start job for default target initrd.target.
Nov 23 23:04:34.773973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:04:34.773983 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:04:34.773991 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 23 23:04:34.773999 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:04:34.774007 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 23 23:04:34.774015 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 23 23:04:34.774024 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 23 23:04:34.774040 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 23 23:04:34.774049 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:04:34.774056 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:04:34.774064 systemd[1]: Reached target paths.target - Path Units.
Nov 23 23:04:34.774071 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:04:34.774079 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:04:34.774086 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 23:04:34.774094 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:04:34.774102 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:04:34.774111 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 23 23:04:34.774118 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 23 23:04:34.774126 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:04:34.774134 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:04:34.774141 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:04:34.774149 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 23:04:34.774157 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 23 23:04:34.774164 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:04:34.774173 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 23 23:04:34.774182 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 23 23:04:34.774190 systemd[1]: Starting systemd-fsck-usr.service...
Nov 23 23:04:34.774197 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:04:34.774205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:04:34.774212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:04:34.774220 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:04:34.774229 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 23 23:04:34.774237 systemd[1]: Finished systemd-fsck-usr.service.
Nov 23 23:04:34.774264 systemd-journald[243]: Collecting audit messages is disabled.
Nov 23 23:04:34.774287 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 23:04:34.774296 systemd-journald[243]: Journal started
Nov 23 23:04:34.774315 systemd-journald[243]: Runtime Journal (/run/log/journal/bac2178e7862474db5b25b3a61aaff7f) is 6M, max 48.5M, 42.4M free.
Nov 23 23:04:34.767128 systemd-modules-load[244]: Inserted module 'overlay'
Nov 23 23:04:34.778300 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:04:34.781513 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 23:04:34.782894 systemd-modules-load[244]: Inserted module 'br_netfilter'
Nov 23 23:04:34.783820 kernel: Bridge firewalling registered
Nov 23 23:04:34.784668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:04:34.786171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:04:34.790048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 23 23:04:34.792683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:04:34.794285 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:04:34.803660 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:04:34.806309 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:04:34.811713 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 23 23:04:34.813724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:04:34.815176 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:04:34.824690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:04:34.825927 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:04:34.828126 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 23 23:04:34.830694 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:04:34.857663 systemd-resolved[291]: Positive Trust Anchors:
Nov 23 23:04:34.857684 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 23:04:34.857714 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 23:04:34.867050 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34
Nov 23 23:04:34.862714 systemd-resolved[291]: Defaulting to hostname 'linux'.
Nov 23 23:04:34.863747 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 23:04:34.867602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:04:34.942530 kernel: SCSI subsystem initialized
Nov 23 23:04:34.946517 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 23:04:34.954545 kernel: iscsi: registered transport (tcp)
Nov 23 23:04:34.967782 kernel: iscsi: registered transport (qla4xxx)
Nov 23 23:04:34.967823 kernel: QLogic iSCSI HBA Driver
Nov 23 23:04:34.985790 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:04:35.006536 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:04:35.008685 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:04:35.057924 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:04:35.060474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 23 23:04:35.121544 kernel: raid6: neonx8 gen() 15118 MB/s
Nov 23 23:04:35.138528 kernel: raid6: neonx4 gen() 15714 MB/s
Nov 23 23:04:35.155548 kernel: raid6: neonx2 gen() 13078 MB/s
Nov 23 23:04:35.172521 kernel: raid6: neonx1 gen() 10464 MB/s
Nov 23 23:04:35.189550 kernel: raid6: int64x8 gen() 6824 MB/s
Nov 23 23:04:35.206589 kernel: raid6: int64x4 gen() 7341 MB/s
Nov 23 23:04:35.223549 kernel: raid6: int64x2 gen() 5922 MB/s
Nov 23 23:04:35.240711 kernel: raid6: int64x1 gen() 5025 MB/s
Nov 23 23:04:35.240776 kernel: raid6: using algorithm neonx4 gen() 15714 MB/s
Nov 23 23:04:35.258617 kernel: raid6: .... xor() 12268 MB/s, rmw enabled
Nov 23 23:04:35.258683 kernel: raid6: using neon recovery algorithm
Nov 23 23:04:35.263529 kernel: xor: measuring software checksum speed
Nov 23 23:04:35.263595 kernel: 8regs : 18873 MB/sec
Nov 23 23:04:35.264705 kernel: 32regs : 21699 MB/sec
Nov 23 23:04:35.265943 kernel: arm64_neon : 28061 MB/sec
Nov 23 23:04:35.265967 kernel: xor: using function: arm64_neon (28061 MB/sec)
Nov 23 23:04:35.317547 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 23 23:04:35.325027 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:04:35.327879 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:04:35.364204 systemd-udevd[502]: Using default interface naming scheme 'v255'.
Nov 23 23:04:35.368331 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:04:35.370748 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 23 23:04:35.401276 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Nov 23 23:04:35.425854 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:04:35.428300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:04:35.482990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:04:35.485150 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 23 23:04:35.537554 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 23 23:04:35.549112 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 23 23:04:35.552126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:04:35.552243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:04:35.559894 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:04:35.565714 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 23 23:04:35.565738 kernel: GPT:9289727 != 19775487
Nov 23 23:04:35.565748 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 23 23:04:35.565757 kernel: GPT:9289727 != 19775487
Nov 23 23:04:35.565766 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 23 23:04:35.565775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:04:35.566061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:04:35.594300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:04:35.608803 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 23 23:04:35.611274 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 23 23:04:35.617526 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 23 23:04:35.618797 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 23 23:04:35.627918 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 23 23:04:35.635512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 23 23:04:35.636654 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 23:04:35.638575 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:04:35.640578 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:04:35.643282 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 23 23:04:35.645093 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 23 23:04:35.660338 disk-uuid[594]: Primary Header is updated.
Nov 23 23:04:35.660338 disk-uuid[594]: Secondary Entries is updated.
Nov 23 23:04:35.660338 disk-uuid[594]: Secondary Header is updated.
Nov 23 23:04:35.664220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:04:35.664740 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 23:04:36.672636 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:04:36.672933 disk-uuid[599]: The operation has completed successfully.
Nov 23 23:04:36.711826 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 23 23:04:36.711937 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 23 23:04:36.731561 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 23 23:04:36.754684 sh[613]: Success
Nov 23 23:04:36.766901 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 23:04:36.766946 kernel: device-mapper: uevent: version 1.0.3
Nov 23 23:04:36.767964 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 23 23:04:36.774532 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 23 23:04:36.803883 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 23 23:04:36.807272 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 23 23:04:36.826727 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 23 23:04:36.834549 kernel: BTRFS: device fsid 9fed50bd-c943-4402-9e9a-f39625143eb9 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (625)
Nov 23 23:04:36.834598 kernel: BTRFS info (device dm-0): first mount of filesystem 9fed50bd-c943-4402-9e9a-f39625143eb9
Nov 23 23:04:36.834609 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:04:36.839515 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 23 23:04:36.839544 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 23 23:04:36.840695 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 23 23:04:36.842081 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 23:04:36.843359 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 23 23:04:36.844252 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 23 23:04:36.845926 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 23 23:04:36.868523 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656)
Nov 23 23:04:36.871098 kernel: BTRFS info (device vda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 23:04:36.871155 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:04:36.874034 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:04:36.874087 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:04:36.879514 kernel: BTRFS info (device vda6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 23:04:36.881537 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 23 23:04:36.883653 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 23 23:04:36.949195 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:04:36.952377 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:04:36.997076 systemd-networkd[798]: lo: Link UP
Nov 23 23:04:36.997090 systemd-networkd[798]: lo: Gained carrier
Nov 23 23:04:36.997925 systemd-networkd[798]: Enumeration completed
Nov 23 23:04:36.998216 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:04:37.000697 ignition[703]: Ignition 2.22.0
Nov 23 23:04:36.998415 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:04:37.000705 ignition[703]: Stage: fetch-offline
Nov 23 23:04:36.998419 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:04:37.000739 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:04:36.999184 systemd-networkd[798]: eth0: Link UP
Nov 23 23:04:37.000747 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:04:36.999306 systemd-networkd[798]: eth0: Gained carrier
Nov 23 23:04:37.000841 ignition[703]: parsed url from cmdline: ""
Nov 23 23:04:36.999315 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:04:37.000995 ignition[703]: no config URL provided
Nov 23 23:04:37.000924 systemd[1]: Reached target network.target - Network.
Nov 23 23:04:37.001002 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Nov 23 23:04:37.016584 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 23 23:04:37.001012 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Nov 23 23:04:37.001044 ignition[703]: op(1): [started] loading QEMU firmware config module
Nov 23 23:04:37.001048 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 23 23:04:37.010201 ignition[703]: op(1): [finished] loading QEMU firmware config module
Nov 23 23:04:37.010225 ignition[703]: QEMU firmware config was not found. Ignoring...
Nov 23 23:04:37.063240 ignition[703]: parsing config with SHA512: 3195f5fc16a3110f614109bb11785de458b0b1872a20ddc1c8cd5b7b3b226cea73b1ddcae08815986b4c8d0b64929b6632eb25d11b7fb93742b38bf2a7e3182c
Nov 23 23:04:37.067550 unknown[703]: fetched base config from "system"
Nov 23 23:04:37.067562 unknown[703]: fetched user config from "qemu"
Nov 23 23:04:37.067923 ignition[703]: fetch-offline: fetch-offline passed
Nov 23 23:04:37.069414 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:04:37.067988 ignition[703]: Ignition finished successfully
Nov 23 23:04:37.071757 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 23 23:04:37.072686 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 23 23:04:37.118559 ignition[812]: Ignition 2.22.0
Nov 23 23:04:37.118575 ignition[812]: Stage: kargs
Nov 23 23:04:37.118714 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:04:37.118723 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:04:37.119491 ignition[812]: kargs: kargs passed
Nov 23 23:04:37.119557 ignition[812]: Ignition finished successfully
Nov 23 23:04:37.123308 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 23 23:04:37.125469 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 23 23:04:37.159042 ignition[820]: Ignition 2.22.0
Nov 23 23:04:37.159057 ignition[820]: Stage: disks
Nov 23 23:04:37.159220 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:04:37.159229 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:04:37.160041 ignition[820]: disks: disks passed
Nov 23 23:04:37.162068 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 23 23:04:37.160094 ignition[820]: Ignition finished successfully
Nov 23 23:04:37.163375 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 23 23:04:37.164573 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 23 23:04:37.166478 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:04:37.168002 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 23:04:37.169690 systemd[1]: Reached target basic.target - Basic System.
Nov 23 23:04:37.172803 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 23 23:04:37.203584 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 23 23:04:37.208360 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 23 23:04:37.211223 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 23 23:04:37.275514 kernel: EXT4-fs (vda9): mounted filesystem c70a3a7b-80c4-4387-ab29-1bf940859b86 r/w with ordered data mode. Quota mode: none.
Nov 23 23:04:37.275963 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 23 23:04:37.277317 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:04:37.279839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:04:37.281668 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 23 23:04:37.282663 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 23 23:04:37.282715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 23 23:04:37.282740 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:04:37.302310 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 23 23:04:37.305089 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 23 23:04:37.310607 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838)
Nov 23 23:04:37.310681 kernel: BTRFS info (device vda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 23:04:37.310709 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:04:37.313611 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:04:37.313647 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:04:37.315087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:04:37.344244 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
Nov 23 23:04:37.349269 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
Nov 23 23:04:37.354029 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
Nov 23 23:04:37.357799 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 23 23:04:37.433554 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 23 23:04:37.435960 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 23 23:04:37.437713 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 23 23:04:37.457524 kernel: BTRFS info (device vda6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 23:04:37.468531 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 23 23:04:37.479871 ignition[951]: INFO : Ignition 2.22.0
Nov 23 23:04:37.479871 ignition[951]: INFO : Stage: mount
Nov 23 23:04:37.481536 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:04:37.481536 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:04:37.481536 ignition[951]: INFO : mount: mount passed
Nov 23 23:04:37.481536 ignition[951]: INFO : Ignition finished successfully
Nov 23 23:04:37.483775 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 23 23:04:37.487816 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 23 23:04:37.832887 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 23 23:04:37.834398 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:04:37.860536 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (964)
Nov 23 23:04:37.860581 kernel: BTRFS info (device vda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 23:04:37.862505 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:04:37.864850 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:04:37.864870 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:04:37.866288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:04:37.900094 ignition[981]: INFO : Ignition 2.22.0
Nov 23 23:04:37.900094 ignition[981]: INFO : Stage: files
Nov 23 23:04:37.901989 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:04:37.901989 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:04:37.901989 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Nov 23 23:04:37.901989 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 23 23:04:37.901989 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 23 23:04:37.908415 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 23 23:04:37.908415 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 23 23:04:37.908415 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 23 23:04:37.908415 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 23 23:04:37.908415 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 23 23:04:37.904248 unknown[981]: wrote ssh authorized keys file for user: core
Nov 23 23:04:37.943249 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 23 23:04:38.037695 systemd-networkd[798]: eth0: Gained IPv6LL
Nov 23 23:04:38.093848 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 23 23:04:38.093848 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:04:38.097888 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:04:38.114730 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:04:38.114730 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:04:38.114730 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 23 23:04:38.374988 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 23 23:04:38.642910 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:04:38.642910 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 23 23:04:38.646571 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:04:38.646571 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:04:38.646571 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 23 23:04:38.646571 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 23 23:04:38.646571 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 23 23:04:38.646571 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 23 23:04:38.646571 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 23 23:04:38.646571 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 23 23:04:38.662756 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 23 23:04:38.666650 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 23 23:04:38.668586 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 23 23:04:38.668586 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 23 23:04:38.668586 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 23 23:04:38.668586 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:04:38.668586 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:04:38.668586 ignition[981]: INFO : files: files passed
Nov 23 23:04:38.668586 ignition[981]: INFO : Ignition finished successfully
Nov 23 23:04:38.670476 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 23 23:04:38.673878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 23 23:04:38.676873 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 23 23:04:38.692633 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 23 23:04:38.692736 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 23 23:04:38.695881 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 23 23:04:38.698639 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:04:38.700700 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:04:38.702009 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:04:38.701344 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:04:38.703223 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 23 23:04:38.706067 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 23 23:04:38.747572 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 23:04:38.747712 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 23 23:04:38.749817 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 23 23:04:38.751448 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 23 23:04:38.753309 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 23 23:04:38.754169 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 23 23:04:38.783182 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:04:38.786669 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 23 23:04:38.811879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:04:38.813059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:04:38.815239 systemd[1]: Stopped target timers.target - Timer Units.
Nov 23 23:04:38.816873 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 23:04:38.817005 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:04:38.819315 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 23 23:04:38.821200 systemd[1]: Stopped target basic.target - Basic System.
Nov 23 23:04:38.822748 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 23 23:04:38.824282 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:04:38.826155 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 23 23:04:38.827928 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 23:04:38.829693 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 23 23:04:38.831559 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 23:04:38.833441 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 23 23:04:38.835328 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 23 23:04:38.836936 systemd[1]: Stopped target swap.target - Swaps.
Nov 23 23:04:38.838264 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 23:04:38.838392 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 23:04:38.840544 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:04:38.842359 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:04:38.844230 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 23 23:04:38.844315 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:04:38.846243 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 23:04:38.846368 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 23 23:04:38.849045 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 23 23:04:38.849174 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:04:38.850900 systemd[1]: Stopped target paths.target - Path Units.
Nov 23 23:04:38.852523 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 23:04:38.856535 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:04:38.857726 systemd[1]: Stopped target slices.target - Slice Units.
Nov 23 23:04:38.859641 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 23 23:04:38.861077 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 23 23:04:38.861168 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:04:38.862762 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 23 23:04:38.862842 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:04:38.864384 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 23 23:04:38.864536 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:04:38.866114 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 23 23:04:38.866216 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 23 23:04:38.868403 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 23 23:04:38.870134 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 23:04:38.870264 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:04:38.882111 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 23 23:04:38.883084 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 23:04:38.883217 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:04:38.885205 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 23:04:38.885310 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:04:38.891946 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 23:04:38.892092 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 23 23:04:38.897685 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 23 23:04:38.901164 ignition[1037]: INFO : Ignition 2.22.0
Nov 23 23:04:38.901164 ignition[1037]: INFO : Stage: umount
Nov 23 23:04:38.902737 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:04:38.902737 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:04:38.902737 ignition[1037]: INFO : umount: umount passed
Nov 23 23:04:38.902737 ignition[1037]: INFO : Ignition finished successfully
Nov 23 23:04:38.905256 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 23 23:04:38.905376 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 23 23:04:38.907214 systemd[1]: Stopped target network.target - Network.
Nov 23 23:04:38.908548 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 23 23:04:38.908615 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 23 23:04:38.910245 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 23 23:04:38.910289 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 23 23:04:38.911688 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 23 23:04:38.911738 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 23 23:04:38.913276 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 23 23:04:38.913318 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 23 23:04:38.915965 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 23 23:04:38.919667 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 23 23:04:38.927582 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 23 23:04:38.927754 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 23 23:04:38.931309 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 23 23:04:38.931669 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 23:04:38.931710 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:04:38.935310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:04:38.935543 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 23 23:04:38.935670 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 23 23:04:38.940298 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 23 23:04:38.940801 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 23 23:04:38.943622 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 23 23:04:38.943674 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:04:38.946849 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 23 23:04:38.948549 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 23 23:04:38.948614 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:04:38.950942 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 23:04:38.950990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:04:38.954113 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 23:04:38.954160 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:04:38.956261 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:04:38.959676 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 23:04:38.964371 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 23 23:04:38.964490 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 23 23:04:38.966554 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 23 23:04:38.966599 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 23 23:04:38.978280 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 23:04:38.978451 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:04:38.980625 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 23 23:04:38.980739 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 23 23:04:38.982921 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 23:04:38.982984 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:04:38.984147 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 23:04:38.984181 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:04:38.985687 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 23:04:38.985738 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:04:38.988185 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 23:04:38.988236 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:04:38.990708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 23 23:04:38.990759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:04:38.996420 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 23 23:04:38.998229 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 23 23:04:38.998298 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:04:39.001191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 23:04:39.001236 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:04:39.004544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:04:39.004587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:04:39.020657 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 23:04:39.021634 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 23 23:04:39.022986 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 23 23:04:39.025657 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 23 23:04:39.060660 systemd[1]: Switching root.
Nov 23 23:04:39.108044 systemd-journald[243]: Journal stopped
Nov 23 23:04:39.945800 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Nov 23 23:04:39.945856 kernel: SELinux: policy capability network_peer_controls=1
Nov 23 23:04:39.945867 kernel: SELinux: policy capability open_perms=1
Nov 23 23:04:39.945876 kernel: SELinux: policy capability extended_socket_class=1
Nov 23 23:04:39.945885 kernel: SELinux: policy capability always_check_network=0
Nov 23 23:04:39.945895 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 23 23:04:39.945904 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 23 23:04:39.945915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 23 23:04:39.945927 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 23 23:04:39.945937 kernel: SELinux: policy capability userspace_initial_context=0
Nov 23 23:04:39.945947 kernel: audit: type=1403 audit(1763939079.288:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 23:04:39.945965 systemd[1]: Successfully loaded SELinux policy in 62.515ms.
Nov 23 23:04:39.945984 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.988ms.
Nov 23 23:04:39.945996 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:04:39.946019 systemd[1]: Detected virtualization kvm.
Nov 23 23:04:39.946031 systemd[1]: Detected architecture arm64.
Nov 23 23:04:39.946046 systemd[1]: Detected first boot.
Nov 23 23:04:39.946056 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 23:04:39.946067 zram_generator::config[1083]: No configuration found.
Nov 23 23:04:39.946081 kernel: NET: Registered PF_VSOCK protocol family
Nov 23 23:04:39.946091 systemd[1]: Populated /etc with preset unit settings.
Nov 23 23:04:39.946102 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 23 23:04:39.946112 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 23:04:39.946123 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 23 23:04:39.946133 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 23:04:39.946144 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 23 23:04:39.946154 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 23 23:04:39.946166 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 23 23:04:39.946176 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 23 23:04:39.946186 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 23 23:04:39.946196 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 23 23:04:39.946207 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 23 23:04:39.946216 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 23 23:04:39.946227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:04:39.946238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:04:39.946248 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 23 23:04:39.946261 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 23 23:04:39.946272 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 23 23:04:39.946282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:04:39.946292 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 23 23:04:39.946302 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:04:39.946319 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:04:39.946329 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 23 23:04:39.946340 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 23 23:04:39.946352 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:04:39.946363 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 23 23:04:39.946374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:04:39.946385 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:04:39.946395 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:04:39.946406 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:04:39.946416 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 23 23:04:39.946426 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 23 23:04:39.946436 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 23 23:04:39.946449 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:04:39.946460 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:04:39.946470 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:04:39.946480 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 23 23:04:39.946490 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 23 23:04:39.946515 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 23 23:04:39.946526 systemd[1]: Mounting media.mount - External Media Directory...
Nov 23 23:04:39.946537 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 23 23:04:39.946547 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 23 23:04:39.946564 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 23 23:04:39.946574 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 23:04:39.946585 systemd[1]: Reached target machines.target - Containers.
Nov 23 23:04:39.946596 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 23 23:04:39.946608 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:04:39.946619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:04:39.946629 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 23 23:04:39.946639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:04:39.946651 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 23:04:39.946661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:04:39.946672 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 23 23:04:39.946683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:04:39.946693 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 23 23:04:39.946703 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 23 23:04:39.946713 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 23 23:04:39.946723 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 23 23:04:39.946732 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 23 23:04:39.946744 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:04:39.946755 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:04:39.946765 kernel: fuse: init (API version 7.41)
Nov 23 23:04:39.946775 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:04:39.946786 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:04:39.946797 kernel: ACPI: bus type drm_connector registered
Nov 23 23:04:39.946807 kernel: loop: module loaded
Nov 23 23:04:39.946817 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 23 23:04:39.946829 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 23 23:04:39.946841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:04:39.946851 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 23 23:04:39.946863 systemd[1]: Stopped verity-setup.service.
Nov 23 23:04:39.946873 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 23 23:04:39.946883 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 23 23:04:39.946895 systemd[1]: Mounted media.mount - External Media Directory.
Nov 23 23:04:39.946905 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 23 23:04:39.946915 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 23 23:04:39.946950 systemd-journald[1155]: Collecting audit messages is disabled.
Nov 23 23:04:39.946974 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 23 23:04:39.946985 systemd-journald[1155]: Journal started
Nov 23 23:04:39.947014 systemd-journald[1155]: Runtime Journal (/run/log/journal/bac2178e7862474db5b25b3a61aaff7f) is 6M, max 48.5M, 42.4M free.
Nov 23 23:04:39.703533 systemd[1]: Queued start job for default target multi-user.target.
Nov 23 23:04:39.726597 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 23 23:04:39.726995 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 23:04:39.950515 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:04:39.951398 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 23 23:04:39.952933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:04:39.954613 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 23:04:39.954852 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 23 23:04:39.956306 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:04:39.956475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:04:39.957960 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 23:04:39.958155 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 23:04:39.959800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:04:39.961541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:04:39.963054 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 23:04:39.963245 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 23 23:04:39.964827 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:04:39.965028 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:04:39.966475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:04:39.967988 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:04:39.969601 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 23 23:04:39.971383 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 23 23:04:39.983637 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:04:39.986094 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 23 23:04:39.988348 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 23 23:04:39.989673 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 23 23:04:39.989713 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:04:39.991688 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 23 23:04:39.997641 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 23 23:04:39.998878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:04:40.000514 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 23 23:04:40.002902 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 23 23:04:40.005816 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 23:04:40.010581 systemd-journald[1155]: Time spent on flushing to /var/log/journal/bac2178e7862474db5b25b3a61aaff7f is 17.400ms for 876 entries.
Nov 23 23:04:40.010581 systemd-journald[1155]: System Journal (/var/log/journal/bac2178e7862474db5b25b3a61aaff7f) is 8M, max 195.6M, 187.6M free.
Nov 23 23:04:40.044778 systemd-journald[1155]: Received client request to flush runtime journal.
Nov 23 23:04:40.044835 kernel: loop0: detected capacity change from 0 to 119840
Nov 23 23:04:40.009648 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 23 23:04:40.015529 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 23:04:40.017660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:04:40.020786 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 23 23:04:40.023550 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 23 23:04:40.032585 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:04:40.035472 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 23 23:04:40.036879 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 23 23:04:40.049543 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 23 23:04:40.052240 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 23 23:04:40.055372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:04:40.061244 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 23 23:04:40.064317 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 23 23:04:40.066515 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 23 23:04:40.079566 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 23 23:04:40.083623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:04:40.085954 kernel: loop1: detected capacity change from 0 to 207008
Nov 23 23:04:40.096735 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 23 23:04:40.111929 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Nov 23 23:04:40.111949 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Nov 23 23:04:40.116132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:04:40.123514 kernel: loop2: detected capacity change from 0 to 100632
Nov 23 23:04:40.169540 kernel: loop3: detected capacity change from 0 to 119840
Nov 23 23:04:40.179559 kernel: loop4: detected capacity change from 0 to 207008
Nov 23 23:04:40.187551 kernel: loop5: detected capacity change from 0 to 100632
Nov 23 23:04:40.196145 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 23 23:04:40.196567 (sd-merge)[1223]: Merged extensions into '/usr'.
Nov 23 23:04:40.202610 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 23 23:04:40.202632 systemd[1]: Reloading...
Nov 23 23:04:40.258518 zram_generator::config[1250]: No configuration found.
Nov 23 23:04:40.349602 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 23 23:04:40.397345 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 23 23:04:40.397738 systemd[1]: Reloading finished in 194 ms.
Nov 23 23:04:40.432377 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 23 23:04:40.434031 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 23 23:04:40.450834 systemd[1]: Starting ensure-sysext.service...
Nov 23 23:04:40.452778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:04:40.462770 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Nov 23 23:04:40.462791 systemd[1]: Reloading...
Nov 23 23:04:40.468060 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 23 23:04:40.468471 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 23 23:04:40.468825 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 23 23:04:40.469140 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 23 23:04:40.470111 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 23 23:04:40.470433 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Nov 23 23:04:40.470553 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Nov 23 23:04:40.474637 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:04:40.474971 systemd-tmpfiles[1286]: Skipping /boot
Nov 23 23:04:40.481463 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:04:40.481661 systemd-tmpfiles[1286]: Skipping /boot
Nov 23 23:04:40.515526 zram_generator::config[1313]: No configuration found.
Nov 23 23:04:40.648115 systemd[1]: Reloading finished in 185 ms.
Nov 23 23:04:40.667455 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 23 23:04:40.674581 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:04:40.685627 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:04:40.688265 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 23 23:04:40.703680 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 23 23:04:40.707254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:04:40.710025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:04:40.712751 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 23 23:04:40.720815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:04:40.724547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:04:40.726900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:04:40.732925 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:04:40.734434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:04:40.734580 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:04:40.736758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:04:40.738539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:04:40.740760 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 23 23:04:40.743136 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:04:40.743296 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:04:40.752641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:04:40.753981 augenrules[1380]: No rules
Nov 23 23:04:40.754253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:04:40.760932 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Nov 23 23:04:40.765522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:04:40.766680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:04:40.766812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:04:40.768437 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 23 23:04:40.772761 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 23 23:04:40.777167 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 23:04:40.777369 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 23:04:40.778981 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 23 23:04:40.781045 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 23 23:04:40.782986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:04:40.783169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:04:40.785128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:04:40.785281 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:04:40.786646 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:04:40.788364 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:04:40.789031 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:04:40.790841 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 23 23:04:40.824545 systemd[1]: Finished ensure-sysext.service.
Nov 23 23:04:40.827267 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 23 23:04:40.838668 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:04:40.841771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:04:40.845970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:04:40.849485 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 23:04:40.852692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:04:40.857916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:04:40.859823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:04:40.859871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:04:40.861522 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:04:40.864715 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 23 23:04:40.865923 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 23 23:04:40.867582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:04:40.869574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:04:40.871396 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 23:04:40.871610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 23:04:40.872833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:04:40.872976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:04:40.875025 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:04:40.875213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:04:40.881024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 23:04:40.881093 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 23:04:40.881951 augenrules[1428]: /sbin/augenrules: No change
Nov 23 23:04:40.893658 augenrules[1457]: No rules
Nov 23 23:04:40.896606 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 23:04:40.897584 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 23:04:40.901016 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 23 23:04:40.913378 systemd-resolved[1353]: Positive Trust Anchors:
Nov 23 23:04:40.913394 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 23:04:40.913426 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 23:04:40.919428 systemd-resolved[1353]: Defaulting to hostname 'linux'.
Nov 23 23:04:40.926246 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 23:04:40.928683 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:04:40.958176 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 23 23:04:40.959631 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 23:04:40.960765 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 23 23:04:40.961985 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 23 23:04:40.963264 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 23 23:04:40.964583 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 23 23:04:40.964622 systemd[1]: Reached target paths.target - Path Units.
Nov 23 23:04:40.964864 systemd-networkd[1440]: lo: Link UP
Nov 23 23:04:40.964878 systemd-networkd[1440]: lo: Gained carrier
Nov 23 23:04:40.965422 systemd[1]: Reached target time-set.target - System Time Set.
Nov 23 23:04:40.966625 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 23 23:04:40.966676 systemd-networkd[1440]: Enumeration completed
Nov 23 23:04:40.967152 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:04:40.967160 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:04:40.967674 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 23 23:04:40.968450 systemd-networkd[1440]: eth0: Link UP
Nov 23 23:04:40.968587 systemd-networkd[1440]: eth0: Gained carrier
Nov 23 23:04:40.968604 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:04:40.968833 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 23:04:40.970774 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 23 23:04:40.973714 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 23 23:04:40.977030 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 23 23:04:40.978600 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 23 23:04:40.979717 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 23 23:04:40.981579 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 23 23:04:40.982636 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 23 23:04:40.982886 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection.
Nov 23 23:04:40.983458 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 23 23:04:40.983530 systemd-timesyncd[1441]: Initial clock synchronization to Sun 2025-11-23 23:04:41.321077 UTC.
Nov 23 23:04:40.984189 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 23 23:04:40.986241 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:04:40.987530 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 23 23:04:40.990427 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 23 23:04:40.993216 systemd[1]: Reached target network.target - Network.
Nov 23 23:04:40.994086 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 23:04:40.994974 systemd[1]: Reached target basic.target - Basic System.
Nov 23 23:04:40.995875 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 23 23:04:40.995908 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 23 23:04:40.996960 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 23 23:04:41.000663 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 23 23:04:41.002744 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 23 23:04:41.011987 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 23 23:04:41.014483 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 23 23:04:41.016071 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 23 23:04:41.017422 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 23 23:04:41.021076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 23 23:04:41.026294 jq[1477]: false
Nov 23 23:04:41.024723 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 23 23:04:41.027437 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 23 23:04:41.037807 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 23 23:04:41.044214 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 23 23:04:41.047209 extend-filesystems[1479]: Found /dev/vda6
Nov 23 23:04:41.047581 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 23 23:04:41.051854 extend-filesystems[1479]: Found /dev/vda9
Nov 23 23:04:41.053846 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 23 23:04:41.054938 extend-filesystems[1479]: Checking size of /dev/vda9
Nov 23 23:04:41.056107 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 23 23:04:41.057769 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 23 23:04:41.059201 systemd[1]: Starting update-engine.service - Update Engine...
Nov 23 23:04:41.068561 extend-filesystems[1479]: Resized partition /dev/vda9
Nov 23 23:04:41.070109 extend-filesystems[1516]: resize2fs 1.47.3 (8-Jul-2025)
Nov 23 23:04:41.074545 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 23 23:04:41.085844 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 23 23:04:41.089462 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 23 23:04:41.093973 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 23 23:04:41.094200 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 23 23:04:41.094489 systemd[1]: motdgen.service: Deactivated successfully.
Nov 23 23:04:41.094693 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 23 23:04:41.097461 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 23 23:04:41.099586 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 23 23:04:41.100555 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 23 23:04:41.101873 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 23 23:04:41.115889 extend-filesystems[1516]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 23 23:04:41.115889 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 23 23:04:41.115889 extend-filesystems[1516]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 23 23:04:41.120903 extend-filesystems[1479]: Resized filesystem in /dev/vda9
Nov 23 23:04:41.119368 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 23 23:04:41.128664 jq[1520]: true
Nov 23 23:04:41.119665 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 23 23:04:41.132384 tar[1522]: linux-arm64/LICENSE
Nov 23 23:04:41.132384 tar[1522]: linux-arm64/helm
Nov 23 23:04:41.138074 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 23 23:04:41.138252 dbus-daemon[1475]: [system] SELinux support is enabled
Nov 23 23:04:41.138415 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 23 23:04:41.143370 update_engine[1506]: I20251123 23:04:41.143034 1506 main.cc:92] Flatcar Update Engine starting
Nov 23 23:04:41.147836 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 23 23:04:41.151991 update_engine[1506]: I20251123 23:04:41.151798 1506 update_check_scheduler.cc:74] Next update check in 10m9s
Nov 23 23:04:41.152903 systemd[1]: Started update-engine.service - Update Engine.
Nov 23 23:04:41.156208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 23 23:04:41.156245 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 23 23:04:41.158387 jq[1536]: true
Nov 23 23:04:41.159824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:04:41.161742 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 23 23:04:41.161777 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 23 23:04:41.167044 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 23 23:04:41.236546 bash[1562]: Updated "/home/core/.ssh/authorized_keys"
Nov 23 23:04:41.240093 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 23 23:04:41.242221 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 23 23:04:41.263273 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 23 23:04:41.288878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:04:41.298787 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 23:04:41.299320 systemd-logind[1494]: New seat seat0.
Nov 23 23:04:41.300168 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 23 23:04:41.342925 containerd[1523]: time="2025-11-23T23:04:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 23 23:04:41.344738 containerd[1523]: time="2025-11-23T23:04:41.343688750Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Nov 23 23:04:41.355312 containerd[1523]: time="2025-11-23T23:04:41.355260465Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="36.185µs"
Nov 23 23:04:41.355480 containerd[1523]: time="2025-11-23T23:04:41.355459778Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 23 23:04:41.355560 containerd[1523]: time="2025-11-23T23:04:41.355524937Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 23 23:04:41.355772 containerd[1523]: time="2025-11-23T23:04:41.355750888Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 23 23:04:41.355838 containerd[1523]: time="2025-11-23T23:04:41.355824135Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 23 23:04:41.355931 containerd[1523]: time="2025-11-23T23:04:41.355916433Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 23 23:04:41.356053 containerd[1523]: time="2025-11-23T23:04:41.356033036Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 23 23:04:41.356116 containerd[1523]: time="2025-11-23T23:04:41.356101071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 23 23:04:41.356619 containerd[1523]: time="2025-11-23T23:04:41.356593078Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 23 23:04:41.356690 containerd[1523]: time="2025-11-23T23:04:41.356676038Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 23 23:04:41.356764 containerd[1523]: time="2025-11-23T23:04:41.356749118Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 23 23:04:41.356818 containerd[1523]: time="2025-11-23T23:04:41.356799895Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 23 23:04:41.356953 containerd[1523]: time="2025-11-23T23:04:41.356936466Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 23 23:04:41.357266 containerd[1523]: time="2025-11-23T23:04:41.357240542Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 23 23:04:41.357355 containerd[1523]: time="2025-11-23T23:04:41.357340136Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 23 23:04:41.357403 containerd[1523]: time="2025-11-23T23:04:41.357390912Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 23 23:04:41.357497 containerd[1523]: time="2025-11-23T23:04:41.357484211Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 23 23:04:41.357858 containerd[1523]: time="2025-11-23T23:04:41.357839397Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 23 23:04:41.357997 containerd[1523]: time="2025-11-23T23:04:41.357978511Z" level=info msg="metadata content store policy set" policy=shared
Nov 23 23:04:41.361762 containerd[1523]: time="2025-11-23T23:04:41.361734891Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 23 23:04:41.361883 containerd[1523]: time="2025-11-23T23:04:41.361869879Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 23 23:04:41.362001 containerd[1523]: time="2025-11-23T23:04:41.361983980Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 23 23:04:41.362088 containerd[1523]: time="2025-11-23T23:04:41.362073735Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 23 23:04:41.362165 containerd[1523]: time="2025-11-23T23:04:41.362150692Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 23 23:04:41.362215 containerd[1523]: time="2025-11-23T23:04:41.362203761Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 23 23:04:41.362266 containerd[1523]: time="2025-11-23T23:04:41.362254871Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 23 23:04:41.362319 containerd[1523]: time="2025-11-23T23:04:41.362306774Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 23 23:04:41.362367 containerd[1523]: time="2025-11-23T23:04:41.362356008Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 23 23:04:41.362417 containerd[1523]: time="2025-11-23T23:04:41.362405200Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 23 23:04:41.362477 containerd[1523]: time="2025-11-23T23:04:41.362465732Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 23 23:04:41.362558 containerd[1523]: time="2025-11-23T23:04:41.362543564Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 23 23:04:41.362738 containerd[1523]: time="2025-11-23T23:04:41.362717280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 23 23:04:41.362810 containerd[1523]: time="2025-11-23T23:04:41.362797197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 23 23:04:41.362866 containerd[1523]: time="2025-11-23T23:04:41.362853226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 23 23:04:41.362921 containerd[1523]: time="2025-11-23T23:04:41.362908296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 23 23:04:41.362974 containerd[1523]: time="2025-11-23T23:04:41.362960782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 23 23:04:41.363046 containerd[1523]: time="2025-11-23T23:04:41.363032528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 23 23:04:41.363099 containerd[1523]: time="2025-11-23T23:04:41.363087223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 23 23:04:41.363156 containerd[1523]: time="2025-11-23T23:04:41.363144253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 23 23:04:41.363205 containerd[1523]: time="2025-11-23T23:04:41.363193862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 23 23:04:41.363260 containerd[1523]: time="2025-11-23T23:04:41.363248224Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 23 23:04:41.363309 containerd[1523]: time="2025-11-23T23:04:41.363298792Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 23 23:04:41.363676 containerd[1523]: time="2025-11-23T23:04:41.363657646Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 23 23:04:41.363748 containerd[1523]: time="2025-11-23T23:04:41.363735062Z" level=info msg="Start snapshots syncer"
Nov 23 23:04:41.363823 containerd[1523]: time="2025-11-23T23:04:41.363810268Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 23 23:04:41.364330 containerd[1523]: time="2025-11-23T23:04:41.364289435Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 23 23:04:41.364510 containerd[1523]: time="2025-11-23T23:04:41.364491291Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 23 23:04:41.364667 containerd[1523]: time="2025-11-23T23:04:41.364650624Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.364991886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365023444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365042495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365056294Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365068592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365080348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365092063Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365118535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365131875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 23 23:04:41.365190 containerd[1523]: time="2025-11-23T23:04:41.365148384Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 23 23:04:41.365433 containerd[1523]: time="2025-11-23T23:04:41.365416482Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 23 23:04:41.365572 containerd[1523]: time="2025-11-23T23:04:41.365553554Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 23 23:04:41.365628 containerd[1523]: time="2025-11-23T23:04:41.365615170Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 23 23:04:41.365677 containerd[1523]: time="2025-11-23T23:04:41.365663820Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 23 23:04:41.365737 containerd[1523]: time="2025-11-23T23:04:41.365723768Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 23 23:04:41.365788 containerd[1523]: time="2025-11-23T23:04:41.365775837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 23 23:04:41.365859 containerd[1523]: time="2025-11-23T23:04:41.365842830Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 23 23:04:41.366068 containerd[1523]: time="2025-11-23T23:04:41.366055316Z" level=info msg="runtime interface created"
Nov 23 23:04:41.366114 containerd[1523]: time="2025-11-23T23:04:41.366102800Z" level=info msg="created NRI interface"
Nov 23 23:04:41.366177 containerd[1523]: time="2025-11-23T23:04:41.366164624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 23 23:04:41.366229 containerd[1523]: time="2025-11-23T23:04:41.366218860Z" level=info msg="Connect containerd service"
Nov 23 23:04:41.366299 containerd[1523]: time="2025-11-23T23:04:41.366286020Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 23 23:04:41.367324 containerd[1523]: time="2025-11-23T23:04:41.367294214Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 23 23:04:41.437817 containerd[1523]: time="2025-11-23T23:04:41.437690263Z" level=info msg="Start subscribing containerd event"
Nov 23 23:04:41.438444 containerd[1523]: time="2025-11-23T23:04:41.437949690Z" level=info msg="Start recovering state"
Nov 23 23:04:41.438444 containerd[1523]: time="2025-11-23T23:04:41.438047408Z" level=info msg="Start event monitor"
Nov 23 23:04:41.438444 containerd[1523]: time="2025-11-23T23:04:41.438061207Z" level=info msg="Start cni network conf syncer for default"
Nov 23 23:04:41.438444 containerd[1523]: time="2025-11-23T23:04:41.438069461Z" level=info msg="Start streaming server"
Nov 23 23:04:41.438444 containerd[1523]: time="2025-11-23T23:04:41.438079883Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 23 23:04:41.438444 containerd[1523]:
time="2025-11-23T23:04:41.438087304Z" level=info msg="runtime interface starting up..." Nov 23 23:04:41.438444 containerd[1523]: time="2025-11-23T23:04:41.438093849Z" level=info msg="starting plugins..." Nov 23 23:04:41.438444 containerd[1523]: time="2025-11-23T23:04:41.438106939Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:04:41.438793 containerd[1523]: time="2025-11-23T23:04:41.438767201Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 23:04:41.438892 containerd[1523]: time="2025-11-23T23:04:41.438879468Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 23:04:41.439141 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 23:04:41.443884 containerd[1523]: time="2025-11-23T23:04:41.443835475Z" level=info msg="containerd successfully booted in 0.101327s" Nov 23 23:04:41.519471 tar[1522]: linux-arm64/README.md Nov 23 23:04:41.545179 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 23:04:42.261696 systemd-networkd[1440]: eth0: Gained IPv6LL Nov 23 23:04:42.267407 sshd_keygen[1509]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:04:42.268239 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 23:04:42.270015 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:04:42.274058 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 23 23:04:42.277699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:04:42.287655 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:04:42.289415 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:04:42.297127 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:04:42.305640 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 23 23:04:42.306117 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 23 23:04:42.308046 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:04:42.308370 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:04:42.310890 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 23:04:42.312417 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:04:42.314374 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 23:04:42.331703 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 23:04:42.334785 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 23:04:42.337291 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 23:04:42.338847 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:04:42.916088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:04:42.917822 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:04:42.919094 systemd[1]: Startup finished in 2.115s (kernel) + 4.672s (initrd) + 3.693s (userspace) = 10.481s. 
Nov 23 23:04:42.920325 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:04:43.283513 kubelet[1633]: E1123 23:04:43.283365 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:04:43.285817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:04:43.285962 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:04:43.286289 systemd[1]: kubelet.service: Consumed 745ms CPU time, 255.8M memory peak. Nov 23 23:04:47.815179 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:04:47.816210 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:48720.service - OpenSSH per-connection server daemon (10.0.0.1:48720). Nov 23 23:04:47.881618 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 48720 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:04:47.883797 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:47.890208 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 23:04:47.891194 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 23:04:47.896571 systemd-logind[1494]: New session 1 of user core. Nov 23 23:04:47.916563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 23:04:47.919399 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 23:04:47.932905 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 23:04:47.935305 systemd-logind[1494]: New session c1 of user core. Nov 23 23:04:48.053147 systemd[1651]: Queued start job for default target default.target. Nov 23 23:04:48.069617 systemd[1651]: Created slice app.slice - User Application Slice. Nov 23 23:04:48.069647 systemd[1651]: Reached target paths.target - Paths. Nov 23 23:04:48.069685 systemd[1651]: Reached target timers.target - Timers. Nov 23 23:04:48.070943 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 23:04:48.081758 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 23:04:48.081884 systemd[1651]: Reached target sockets.target - Sockets. Nov 23 23:04:48.081930 systemd[1651]: Reached target basic.target - Basic System. Nov 23 23:04:48.081959 systemd[1651]: Reached target default.target - Main User Target. Nov 23 23:04:48.081990 systemd[1651]: Startup finished in 140ms. Nov 23 23:04:48.082093 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 23:04:48.084502 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 23:04:48.152706 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:48722.service - OpenSSH per-connection server daemon (10.0.0.1:48722). Nov 23 23:04:48.207024 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 48722 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:04:48.208583 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:48.212729 systemd-logind[1494]: New session 2 of user core. 
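
[annotation] The kubelet crash at the top of this block is the expected first-boot state on a kubeadm-provisioned node: /var/lib/kubelet/config.yaml is only written by kubeadm init or kubeadm join, so until one of those runs the unit exits and systemd keeps rescheduling it (see the "Scheduled restart job" entries later in this log). For orientation only, a minimal kubeadm-style KubeletConfiguration has roughly this shape (a sketch with hypothetical values; the real file is generated):

    # /var/lib/kubelet/config.yaml (sketch; kubeadm writes the real one)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # matches SystemdCgroup=true in the runc options above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt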
Nov 23 23:04:48.232790 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 23:04:48.290864 sshd[1665]: Connection closed by 10.0.0.1 port 48722 Nov 23 23:04:48.291604 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:48.310015 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:48722.service: Deactivated successfully. Nov 23 23:04:48.311936 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 23:04:48.314826 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. Nov 23 23:04:48.317039 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:48732.service - OpenSSH per-connection server daemon (10.0.0.1:48732). Nov 23 23:04:48.320608 systemd-logind[1494]: Removed session 2. Nov 23 23:04:48.394118 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 48732 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:04:48.395888 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:48.400656 systemd-logind[1494]: New session 3 of user core. Nov 23 23:04:48.409776 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:04:48.460175 sshd[1674]: Connection closed by 10.0.0.1 port 48732 Nov 23 23:04:48.460624 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:48.477805 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:48732.service: Deactivated successfully. Nov 23 23:04:48.480426 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 23:04:48.482629 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. Nov 23 23:04:48.486926 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:48740.service - OpenSSH per-connection server daemon (10.0.0.1:48740). Nov 23 23:04:48.488589 systemd-logind[1494]: Removed session 3. Nov 23 23:04:48.552853 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 48740 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:04:48.554445 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:48.560414 systemd-logind[1494]: New session 4 of user core. Nov 23 23:04:48.575754 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 23:04:48.629793 sshd[1683]: Connection closed by 10.0.0.1 port 48740 Nov 23 23:04:48.630104 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:48.648895 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:48740.service: Deactivated successfully. Nov 23 23:04:48.652052 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:04:48.652945 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:04:48.655526 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:48750.service - OpenSSH per-connection server daemon (10.0.0.1:48750). Nov 23 23:04:48.656276 systemd-logind[1494]: Removed session 4. Nov 23 23:04:48.713970 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 48750 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:04:48.715342 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:48.719454 systemd-logind[1494]: New session 5 of user core. Nov 23 23:04:48.725708 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 23 23:04:48.786064 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 23:04:48.786332 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:04:48.806574 sudo[1694]: pam_unix(sudo:session): session closed for user root Nov 23 23:04:48.809316 sshd[1693]: Connection closed by 10.0.0.1 port 48750 Nov 23 23:04:48.810196 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:48.819652 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:48750.service: Deactivated successfully. Nov 23 23:04:48.821668 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 23:04:48.823227 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. Nov 23 23:04:48.825829 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:48766.service - OpenSSH per-connection server daemon (10.0.0.1:48766). Nov 23 23:04:48.826534 systemd-logind[1494]: Removed session 5. Nov 23 23:04:48.904474 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 48766 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:04:48.905935 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:48.910613 systemd-logind[1494]: New session 6 of user core. Nov 23 23:04:48.920746 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 23:04:48.974066 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 23:04:48.974721 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:04:49.049589 sudo[1705]: pam_unix(sudo:session): session closed for user root Nov 23 23:04:49.055014 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 23:04:49.055294 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:04:49.067048 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:04:49.116054 augenrules[1727]: No rules Nov 23 23:04:49.117582 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:04:49.117953 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:04:49.121652 sudo[1704]: pam_unix(sudo:session): session closed for user root Nov 23 23:04:49.123168 sshd[1703]: Connection closed by 10.0.0.1 port 48766 Nov 23 23:04:49.124026 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:49.138050 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:48766.service: Deactivated successfully. Nov 23 23:04:49.141125 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 23:04:49.141926 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. Nov 23 23:04:49.144837 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:48778.service - OpenSSH per-connection server daemon (10.0.0.1:48778). Nov 23 23:04:49.145905 systemd-logind[1494]: Removed session 6. Nov 23 23:04:49.203051 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 48778 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:04:49.204403 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:49.208673 systemd-logind[1494]: New session 7 of user core. Nov 23 23:04:49.219757 systemd[1]: Started session-7.scope - Session 7 of User core. 
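
[annotation] The sudo sequence above deletes the shipped rule fragments under /etc/audit/rules.d and restarts audit-rules.service, after which augenrules correctly reports "No rules". augenrules is the tool doing the work behind that unit: it assembles every *.rules file under /etc/audit/rules.d into the active kernel ruleset. A hedged sketch of the same mechanism run by hand:

    # augenrules assembles /etc/audit/rules.d/*.rules into the loaded ruleset
    augenrules --check    # report whether the assembled rules differ from the loaded ones
    augenrules --load     # rebuild and load them (an empty set here, since rules.d was just emptied)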
Nov 23 23:04:49.272349 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 23:04:49.272951 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:04:49.567923 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 23:04:49.584880 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 23:04:49.785018 dockerd[1761]: time="2025-11-23T23:04:49.784949694Z" level=info msg="Starting up" Nov 23 23:04:49.785865 dockerd[1761]: time="2025-11-23T23:04:49.785838400Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 23:04:49.796725 dockerd[1761]: time="2025-11-23T23:04:49.796688036Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 23:04:49.966199 dockerd[1761]: time="2025-11-23T23:04:49.966125542Z" level=info msg="Loading containers: start." Nov 23 23:04:49.979481 kernel: Initializing XFRM netlink socket Nov 23 23:04:50.201938 systemd-networkd[1440]: docker0: Link UP Nov 23 23:04:50.205525 dockerd[1761]: time="2025-11-23T23:04:50.205476698Z" level=info msg="Loading containers: done." Nov 23 23:04:50.219784 dockerd[1761]: time="2025-11-23T23:04:50.219689333Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 23:04:50.219784 dockerd[1761]: time="2025-11-23T23:04:50.219763665Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 23:04:50.219918 dockerd[1761]: time="2025-11-23T23:04:50.219850351Z" level=info msg="Initializing buildkit" Nov 23 23:04:50.241278 dockerd[1761]: time="2025-11-23T23:04:50.241243777Z" level=info msg="Completed buildkit initialization" Nov 23 23:04:50.247687 dockerd[1761]: time="2025-11-23T23:04:50.247644107Z" level=info msg="Daemon has completed initialization" Nov 23 23:04:50.247895 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 23:04:50.248138 dockerd[1761]: time="2025-11-23T23:04:50.247798239Z" level=info msg="API listen on /run/docker.sock" Nov 23 23:04:50.800456 containerd[1523]: time="2025-11-23T23:04:50.800407831Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 23 23:04:51.317416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001270752.mount: Deactivated successfully. 
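
[annotation] dockerd's one warning above concerns build performance, not correctness: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, overlayfs can rename directories via redirects that the native diff path cannot compare, so Docker falls back to a slower differ when computing layer diffs. Two hedged ways to confirm what the daemon settled on:

    # storage driver dockerd selected (the log above reports overlay2, v28.0.4)
    docker info --format '{{.Driver}}'
    # the kernel knob the warning refers to, when the overlay module is present
    cat /sys/module/overlay/parameters/redirect_dir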
Nov 23 23:04:52.367438 containerd[1523]: time="2025-11-23T23:04:52.367353343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:52.368055 containerd[1523]: time="2025-11-23T23:04:52.368022704Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431961" Nov 23 23:04:52.368980 containerd[1523]: time="2025-11-23T23:04:52.368927491Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:52.371674 containerd[1523]: time="2025-11-23T23:04:52.371640761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:52.372747 containerd[1523]: time="2025-11-23T23:04:52.372713363Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.572261729s" Nov 23 23:04:52.372870 containerd[1523]: time="2025-11-23T23:04:52.372853512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Nov 23 23:04:52.373810 containerd[1523]: time="2025-11-23T23:04:52.373781038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 23 23:04:53.367764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 23:04:53.369220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
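
[annotation] Note that these pulls are performed by containerd's cri plugin in the k8s.io namespace (registered with NRI during startup above), not by dockerd, so `docker images` will not show them. They can be inspected with the CRI or containerd tooling directly, assuming crictl is pointed at /run/containerd/containerd.sock, the endpoint from the config dump:

    crictl images | grep kube-apiserver
    ctr -n k8s.io images ls | grep kube-apiserver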
Nov 23 23:04:53.393533 containerd[1523]: time="2025-11-23T23:04:53.392511097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:53.394213 containerd[1523]: time="2025-11-23T23:04:53.394164314Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618957" Nov 23 23:04:53.395203 containerd[1523]: time="2025-11-23T23:04:53.395157366Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:53.398607 containerd[1523]: time="2025-11-23T23:04:53.398562110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:53.400866 containerd[1523]: time="2025-11-23T23:04:53.400479071Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.026660445s" Nov 23 23:04:53.400866 containerd[1523]: time="2025-11-23T23:04:53.400532118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Nov 23 23:04:53.401069 containerd[1523]: time="2025-11-23T23:04:53.401037297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 23 23:04:53.513623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:04:53.518222 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:04:53.575598 kubelet[2051]: E1123 23:04:53.575541 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:04:53.578546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:04:53.578683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:04:53.580604 systemd[1]: kubelet.service: Consumed 162ms CPU time, 107.9M memory peak. 
Nov 23 23:04:54.821480 containerd[1523]: time="2025-11-23T23:04:54.821412837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:54.822016 containerd[1523]: time="2025-11-23T23:04:54.821981476Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618438" Nov 23 23:04:54.823242 containerd[1523]: time="2025-11-23T23:04:54.823202532Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:54.827171 containerd[1523]: time="2025-11-23T23:04:54.827119577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:54.829640 containerd[1523]: time="2025-11-23T23:04:54.829592196Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.428518001s" Nov 23 23:04:54.829640 containerd[1523]: time="2025-11-23T23:04:54.829636685Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Nov 23 23:04:54.830154 containerd[1523]: time="2025-11-23T23:04:54.830112116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 23 23:04:56.044209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761424070.mount: Deactivated successfully. 
Nov 23 23:04:56.337860 containerd[1523]: time="2025-11-23T23:04:56.337617690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:56.339101 containerd[1523]: time="2025-11-23T23:04:56.338641327Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561801" Nov 23 23:04:56.340620 containerd[1523]: time="2025-11-23T23:04:56.340588596Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:56.343086 containerd[1523]: time="2025-11-23T23:04:56.343045028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:56.343641 containerd[1523]: time="2025-11-23T23:04:56.343603068Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.513447004s" Nov 23 23:04:56.343641 containerd[1523]: time="2025-11-23T23:04:56.343639917Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Nov 23 23:04:56.344174 containerd[1523]: time="2025-11-23T23:04:56.344100606Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 23 23:04:56.867018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647687545.mount: Deactivated successfully. 
Nov 23 23:04:57.917797 containerd[1523]: time="2025-11-23T23:04:57.917731773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:57.925606 containerd[1523]: time="2025-11-23T23:04:57.925535955Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Nov 23 23:04:57.931434 containerd[1523]: time="2025-11-23T23:04:57.931369845Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:57.934891 containerd[1523]: time="2025-11-23T23:04:57.934820879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:04:57.935914 containerd[1523]: time="2025-11-23T23:04:57.935889700Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.591753982s" Nov 23 23:04:57.935974 containerd[1523]: time="2025-11-23T23:04:57.935919487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 23 23:04:57.936508 containerd[1523]: time="2025-11-23T23:04:57.936469897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 23 23:04:58.390399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048774781.mount: Deactivated successfully. 
Nov 23 23:04:58.396648 containerd[1523]: time="2025-11-23T23:04:58.396590220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:04:58.397399 containerd[1523]: time="2025-11-23T23:04:58.397357914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 23 23:04:58.398403 containerd[1523]: time="2025-11-23T23:04:58.398362596Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:04:58.400263 containerd[1523]: time="2025-11-23T23:04:58.400226649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:04:58.401451 containerd[1523]: time="2025-11-23T23:04:58.401427101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 464.926578ms" Nov 23 23:04:58.401518 containerd[1523]: time="2025-11-23T23:04:58.401457433Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 23 23:04:58.402253 containerd[1523]: time="2025-11-23T23:04:58.402014452Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 23 23:04:58.953758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139117770.mount: Deactivated successfully. 
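
[annotation] The pull sequence running through this stretch of the log (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd just below) is the control-plane image set kubeadm pre-fetches before init. The same set can be listed or pulled explicitly; the version flag here is an assumption matching the tags above:

    kubeadm config images list --kubernetes-version v1.32.10
    kubeadm config images pull --kubernetes-version v1.32.10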
Nov 23 23:05:00.592906 containerd[1523]: time="2025-11-23T23:05:00.592841792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:00.593378 containerd[1523]: time="2025-11-23T23:05:00.593348839Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Nov 23 23:05:00.594512 containerd[1523]: time="2025-11-23T23:05:00.594367910Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:00.597184 containerd[1523]: time="2025-11-23T23:05:00.597149245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:00.599285 containerd[1523]: time="2025-11-23T23:05:00.599153756Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.197106933s" Nov 23 23:05:00.599285 containerd[1523]: time="2025-11-23T23:05:00.599195896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 23 23:05:03.618131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 23:05:03.621065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:03.771512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:03.781810 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:05:03.818754 kubelet[2212]: E1123 23:05:03.818688 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:05:03.821205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:05:03.821448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:05:03.823594 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.2M memory peak. Nov 23 23:05:06.478027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:06.478577 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.2M memory peak. Nov 23 23:05:06.480441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:06.505777 systemd[1]: Reload requested from client PID 2229 ('systemctl') (unit session-7.scope)... Nov 23 23:05:06.505792 systemd[1]: Reloading... Nov 23 23:05:06.575540 zram_generator::config[2271]: No configuration found. Nov 23 23:05:07.036656 systemd[1]: Reloading finished in 530 ms. Nov 23 23:05:07.091955 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 23 23:05:07.092036 systemd[1]: kubelet.service: Failed with result 'signal'. 
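
[annotation] The "Referenced but unset environment variable" notice on each kubelet start, and the systemctl-driven unit reload above, point at the same mechanism: the kubelet unit's ExecStart references $KUBELET_EXTRA_ARGS and $KUBELET_KUBEADM_ARGS, which stay empty until a drop-in or environment file defines them. A kubeadm-style setup fills them in roughly like this (drop-in path and flag value illustrative only; the node IP is taken from the sshd entries above):

    # /etc/systemd/system/kubelet.service.d/20-extra-args.conf (illustrative)
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.73"

followed by systemctl daemon-reload and a kubelet restart, consistent with the reload requested from session-7 above.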
Nov 23 23:05:07.092289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:07.092341 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95M memory peak. Nov 23 23:05:07.094768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:07.214293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:07.219875 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:05:07.266039 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:05:07.266039 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:05:07.266039 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:05:07.267349 kubelet[2316]: I1123 23:05:07.267286 2316 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:05:08.229350 kubelet[2316]: I1123 23:05:08.229295 2316 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 23:05:08.229350 kubelet[2316]: I1123 23:05:08.229334 2316 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:05:08.229661 kubelet[2316]: I1123 23:05:08.229634 2316 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 23:05:08.256316 kubelet[2316]: E1123 23:05:08.256243 2316 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:08.257731 kubelet[2316]: I1123 23:05:08.257699 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:05:08.263821 kubelet[2316]: I1123 23:05:08.263795 2316 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:05:08.269203 kubelet[2316]: I1123 23:05:08.269128 2316 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:05:08.269788 kubelet[2316]: I1123 23:05:08.269752 2316 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:05:08.269960 kubelet[2316]: I1123 23:05:08.269787 2316 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:05:08.270060 kubelet[2316]: I1123 23:05:08.270033 2316 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:05:08.270060 kubelet[2316]: I1123 23:05:08.270042 2316 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 23:05:08.270260 kubelet[2316]: I1123 23:05:08.270241 2316 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:05:08.273097 kubelet[2316]: I1123 23:05:08.273075 2316 kubelet.go:446] "Attempting to sync node with API server" Nov 23 23:05:08.273097 kubelet[2316]: I1123 23:05:08.273100 2316 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:05:08.274008 kubelet[2316]: I1123 23:05:08.273988 2316 kubelet.go:352] "Adding apiserver pod source" Nov 23 23:05:08.274008 kubelet[2316]: I1123 23:05:08.274010 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:05:08.275364 kubelet[2316]: W1123 23:05:08.275307 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Nov 23 23:05:08.275421 kubelet[2316]: E1123 23:05:08.275367 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:08.276031 kubelet[2316]: W1123 23:05:08.275992 2316 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Nov 23 23:05:08.276425 kubelet[2316]: E1123 23:05:08.276398 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:08.277179 kubelet[2316]: I1123 23:05:08.277154 2316 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:05:08.278122 kubelet[2316]: I1123 23:05:08.278097 2316 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 23:05:08.278359 kubelet[2316]: W1123 23:05:08.278344 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 23 23:05:08.279767 kubelet[2316]: I1123 23:05:08.279740 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:05:08.279847 kubelet[2316]: I1123 23:05:08.279785 2316 server.go:1287] "Started kubelet" Nov 23 23:05:08.281128 kubelet[2316]: I1123 23:05:08.281089 2316 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:05:08.282521 kubelet[2316]: I1123 23:05:08.282418 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:05:08.283637 kubelet[2316]: I1123 23:05:08.282964 2316 server.go:479] "Adding debug handlers to kubelet server" Nov 23 23:05:08.283637 kubelet[2316]: E1123 23:05:08.283377 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac545f26f3996 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-23 23:05:08.279761302 +0000 UTC m=+1.056859912,LastTimestamp:2025-11-23 23:05:08.279761302 +0000 UTC m=+1.056859912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 23 23:05:08.284368 kubelet[2316]: I1123 23:05:08.284282 2316 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:05:08.284972 kubelet[2316]: I1123 23:05:08.284948 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:05:08.285178 kubelet[2316]: I1123 23:05:08.285150 2316 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:05:08.285253 kubelet[2316]: I1123 23:05:08.285229 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:05:08.285733 kubelet[2316]: E1123 23:05:08.285709 2316 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:05:08.286295 kubelet[2316]: I1123 23:05:08.286279 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:05:08.286352 kubelet[2316]: I1123 23:05:08.286336 2316 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:05:08.286966 kubelet[2316]: W1123 23:05:08.286894 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Nov 23 23:05:08.286966 kubelet[2316]: E1123 23:05:08.286947 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:08.287102 kubelet[2316]: E1123 23:05:08.287080 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:08.287295 kubelet[2316]: E1123 23:05:08.287269 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" Nov 23 23:05:08.288066 kubelet[2316]: I1123 23:05:08.288049 2316 factory.go:221] Registration of the systemd container factory successfully Nov 23 23:05:08.288166 kubelet[2316]: I1123 23:05:08.288149 2316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:05:08.289304 kubelet[2316]: I1123 23:05:08.289285 2316 factory.go:221] Registration of the containerd container factory successfully Nov 23 23:05:08.301233 kubelet[2316]: I1123 23:05:08.301045 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:05:08.302333 kubelet[2316]: I1123 23:05:08.302188 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 23:05:08.302333 kubelet[2316]: I1123 23:05:08.302212 2316 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:05:08.302333 kubelet[2316]: I1123 23:05:08.302235 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
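
[annotation] The repeated "connection refused" errors against https://10.0.0.73:6443 in this block are the normal bootstrap ordering, not a fault: the kubelet comes up first, the API server does not exist yet, and the kubelet is expected to create it from the static pod path registered above (/etc/kubernetes/manifests). A static pod is a plain Pod manifest dropped into that directory; a skeleton of the kind kubeadm writes there, with most details elided (image tag from the pulls above, volume name matching the reconciler entries further down):

    # /etc/kubernetes/manifests/kube-apiserver.yaml (skeleton; kubeadm generates the real manifest)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.10
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate

Once the apiserver container is running, the watch and lease errors below stop and the node can register.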
Nov 23 23:05:08.302333 kubelet[2316]: I1123 23:05:08.302242 2316 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:05:08.302333 kubelet[2316]: E1123 23:05:08.302286 2316 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:05:08.305144 kubelet[2316]: I1123 23:05:08.305113 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:05:08.305144 kubelet[2316]: I1123 23:05:08.305134 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:05:08.305144 kubelet[2316]: I1123 23:05:08.305152 2316 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:05:08.305417 kubelet[2316]: W1123 23:05:08.305377 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Nov 23 23:05:08.305462 kubelet[2316]: E1123 23:05:08.305429 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:08.387586 kubelet[2316]: E1123 23:05:08.387532 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:08.402769 kubelet[2316]: E1123 23:05:08.402719 2316 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 23 23:05:08.447356 kubelet[2316]: I1123 23:05:08.447301 2316 policy_none.go:49] "None policy: Start" Nov 23 23:05:08.447356 kubelet[2316]: I1123 23:05:08.447335 2316 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:05:08.447356 kubelet[2316]: I1123 23:05:08.447355 2316 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:05:08.453165 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 23:05:08.473572 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 23:05:08.476522 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
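
[annotation] The three slices created above are the kubelet's QoS hierarchy under the systemd cgroup driver: kubepods.slice is the root for all pods (guaranteed pods live directly under it), with kubepods-burstable.slice and kubepods-besteffort.slice nested inside for the other classes; the per-pod slices just below embed each static pod's UID in their names. The hierarchy can be inspected with ordinary systemd tooling:

    systemctl status kubepods.slice kubepods-burstable.slice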
Nov 23 23:05:08.487848 kubelet[2316]: E1123 23:05:08.487673 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:08.488100 kubelet[2316]: E1123 23:05:08.488059 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" Nov 23 23:05:08.490433 kubelet[2316]: I1123 23:05:08.490411 2316 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:05:08.490670 kubelet[2316]: I1123 23:05:08.490649 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:05:08.490707 kubelet[2316]: I1123 23:05:08.490670 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:05:08.490941 kubelet[2316]: I1123 23:05:08.490916 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:05:08.492053 kubelet[2316]: E1123 23:05:08.492022 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:05:08.492205 kubelet[2316]: E1123 23:05:08.492068 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 23 23:05:08.595146 kubelet[2316]: I1123 23:05:08.593102 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:08.595146 kubelet[2316]: E1123 23:05:08.593827 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Nov 23 23:05:08.614285 systemd[1]: Created slice kubepods-burstable-pod7694bb2d8b25cccd30af6fe17aa695d7.slice - libcontainer container kubepods-burstable-pod7694bb2d8b25cccd30af6fe17aa695d7.slice. Nov 23 23:05:08.634646 kubelet[2316]: E1123 23:05:08.634614 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:08.637478 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Nov 23 23:05:08.640018 kubelet[2316]: E1123 23:05:08.639824 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:08.655752 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. 
Nov 23 23:05:08.657895 kubelet[2316]: E1123 23:05:08.657785 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:08.688162 kubelet[2316]: I1123 23:05:08.688102 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:08.688162 kubelet[2316]: I1123 23:05:08.688166 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7694bb2d8b25cccd30af6fe17aa695d7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7694bb2d8b25cccd30af6fe17aa695d7\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:08.688317 kubelet[2316]: I1123 23:05:08.688187 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7694bb2d8b25cccd30af6fe17aa695d7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7694bb2d8b25cccd30af6fe17aa695d7\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:08.688317 kubelet[2316]: I1123 23:05:08.688232 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:08.688317 kubelet[2316]: I1123 23:05:08.688265 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:08.688317 kubelet[2316]: I1123 23:05:08.688286 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:08.688317 kubelet[2316]: I1123 23:05:08.688303 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:08.688410 kubelet[2316]: I1123 23:05:08.688322 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:08.688410 kubelet[2316]: I1123 23:05:08.688337 2316 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7694bb2d8b25cccd30af6fe17aa695d7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7694bb2d8b25cccd30af6fe17aa695d7\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:08.795078 kubelet[2316]: I1123 23:05:08.794978 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:08.795379 kubelet[2316]: E1123 23:05:08.795345 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Nov 23 23:05:08.889535 kubelet[2316]: E1123 23:05:08.889478 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" Nov 23 23:05:08.935839 kubelet[2316]: E1123 23:05:08.935811 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:08.936516 containerd[1523]: time="2025-11-23T23:05:08.936396786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7694bb2d8b25cccd30af6fe17aa695d7,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:08.940599 kubelet[2316]: E1123 23:05:08.940568 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:08.941152 containerd[1523]: time="2025-11-23T23:05:08.941119798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:08.958480 kubelet[2316]: E1123 23:05:08.958441 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:08.959119 containerd[1523]: time="2025-11-23T23:05:08.958955997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:09.025820 containerd[1523]: time="2025-11-23T23:05:09.025692410Z" level=info msg="connecting to shim 427fb49d1eb0c5a3cb7c2353e5c75aaaabc293ef5557a0213388a30b15b521ea" address="unix:///run/containerd/s/2ee91b01d46a953088b1b79e22a7ee373af0d52aa64b0ed0dc095506e05fd505" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:09.030690 containerd[1523]: time="2025-11-23T23:05:09.030646898Z" level=info msg="connecting to shim 22d76cb45817d86192ccdd086eaeda61967401e87ca761852aa8401d1602c574" address="unix:///run/containerd/s/64f39b841ea27704465940e490f9b1caea31fb3d91088d851885c142f3097159" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:09.039052 containerd[1523]: time="2025-11-23T23:05:09.038998193Z" level=info msg="connecting to shim 5f272b121cc42d1a14087f26f3b11ea7d15f6d882cecd423e2cc300d31c0d98d" address="unix:///run/containerd/s/51252d901408a8f0748e0627466ba124499d6416bd3ee20cc39079b88003cbc6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:09.058849 systemd[1]: Started cri-containerd-427fb49d1eb0c5a3cb7c2353e5c75aaaabc293ef5557a0213388a30b15b521ea.scope - libcontainer container 
427fb49d1eb0c5a3cb7c2353e5c75aaaabc293ef5557a0213388a30b15b521ea. Nov 23 23:05:09.063861 systemd[1]: Started cri-containerd-22d76cb45817d86192ccdd086eaeda61967401e87ca761852aa8401d1602c574.scope - libcontainer container 22d76cb45817d86192ccdd086eaeda61967401e87ca761852aa8401d1602c574. Nov 23 23:05:09.065442 systemd[1]: Started cri-containerd-5f272b121cc42d1a14087f26f3b11ea7d15f6d882cecd423e2cc300d31c0d98d.scope - libcontainer container 5f272b121cc42d1a14087f26f3b11ea7d15f6d882cecd423e2cc300d31c0d98d. Nov 23 23:05:09.096387 kubelet[2316]: W1123 23:05:09.096325 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Nov 23 23:05:09.096739 kubelet[2316]: E1123 23:05:09.096393 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:09.108854 containerd[1523]: time="2025-11-23T23:05:09.108782734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f272b121cc42d1a14087f26f3b11ea7d15f6d882cecd423e2cc300d31c0d98d\"" Nov 23 23:05:09.110129 kubelet[2316]: E1123 23:05:09.110060 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:09.114478 containerd[1523]: time="2025-11-23T23:05:09.114434281Z" level=info msg="CreateContainer within sandbox \"5f272b121cc42d1a14087f26f3b11ea7d15f6d882cecd423e2cc300d31c0d98d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 23:05:09.116831 containerd[1523]: time="2025-11-23T23:05:09.116779954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7694bb2d8b25cccd30af6fe17aa695d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"427fb49d1eb0c5a3cb7c2353e5c75aaaabc293ef5557a0213388a30b15b521ea\"" Nov 23 23:05:09.117650 kubelet[2316]: E1123 23:05:09.117616 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:09.119667 containerd[1523]: time="2025-11-23T23:05:09.119611473Z" level=info msg="CreateContainer within sandbox \"427fb49d1eb0c5a3cb7c2353e5c75aaaabc293ef5557a0213388a30b15b521ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:05:09.123151 containerd[1523]: time="2025-11-23T23:05:09.123106858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"22d76cb45817d86192ccdd086eaeda61967401e87ca761852aa8401d1602c574\"" Nov 23 23:05:09.123972 kubelet[2316]: E1123 23:05:09.123947 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:09.125961 containerd[1523]: time="2025-11-23T23:05:09.125916836Z" level=info msg="CreateContainer within sandbox 
\"22d76cb45817d86192ccdd086eaeda61967401e87ca761852aa8401d1602c574\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:05:09.197514 kubelet[2316]: I1123 23:05:09.197449 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:09.197915 kubelet[2316]: E1123 23:05:09.197869 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Nov 23 23:05:09.239853 containerd[1523]: time="2025-11-23T23:05:09.239793073Z" level=info msg="Container 106ef3aa75057be6719e5ce7afb45fc0726c77192d771f38d3c99fee70f3606a: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:09.309194 containerd[1523]: time="2025-11-23T23:05:09.308358431Z" level=info msg="Container 8d0a8bc265fc93a7dbafa1cdd3180a464bb014ecfb3e81e7d36b9fae6b671914: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:09.318178 containerd[1523]: time="2025-11-23T23:05:09.318037097Z" level=info msg="CreateContainer within sandbox \"5f272b121cc42d1a14087f26f3b11ea7d15f6d882cecd423e2cc300d31c0d98d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"106ef3aa75057be6719e5ce7afb45fc0726c77192d771f38d3c99fee70f3606a\"" Nov 23 23:05:09.319042 containerd[1523]: time="2025-11-23T23:05:09.319014357Z" level=info msg="StartContainer for \"106ef3aa75057be6719e5ce7afb45fc0726c77192d771f38d3c99fee70f3606a\"" Nov 23 23:05:09.320376 containerd[1523]: time="2025-11-23T23:05:09.320344931Z" level=info msg="connecting to shim 106ef3aa75057be6719e5ce7afb45fc0726c77192d771f38d3c99fee70f3606a" address="unix:///run/containerd/s/51252d901408a8f0748e0627466ba124499d6416bd3ee20cc39079b88003cbc6" protocol=ttrpc version=3 Nov 23 23:05:09.321659 containerd[1523]: time="2025-11-23T23:05:09.321615686Z" level=info msg="Container 5e3b64dd58c12a4ebcd621af686c6667dbd8e70c8ac72485c417fca56f5f90bb: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:09.327043 containerd[1523]: time="2025-11-23T23:05:09.326984349Z" level=info msg="CreateContainer within sandbox \"427fb49d1eb0c5a3cb7c2353e5c75aaaabc293ef5557a0213388a30b15b521ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d0a8bc265fc93a7dbafa1cdd3180a464bb014ecfb3e81e7d36b9fae6b671914\"" Nov 23 23:05:09.328515 containerd[1523]: time="2025-11-23T23:05:09.327886494Z" level=info msg="StartContainer for \"8d0a8bc265fc93a7dbafa1cdd3180a464bb014ecfb3e81e7d36b9fae6b671914\"" Nov 23 23:05:09.329021 containerd[1523]: time="2025-11-23T23:05:09.328979951Z" level=info msg="connecting to shim 8d0a8bc265fc93a7dbafa1cdd3180a464bb014ecfb3e81e7d36b9fae6b671914" address="unix:///run/containerd/s/2ee91b01d46a953088b1b79e22a7ee373af0d52aa64b0ed0dc095506e05fd505" protocol=ttrpc version=3 Nov 23 23:05:09.331976 containerd[1523]: time="2025-11-23T23:05:09.331922301Z" level=info msg="CreateContainer within sandbox \"22d76cb45817d86192ccdd086eaeda61967401e87ca761852aa8401d1602c574\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e3b64dd58c12a4ebcd621af686c6667dbd8e70c8ac72485c417fca56f5f90bb\"" Nov 23 23:05:09.332842 containerd[1523]: time="2025-11-23T23:05:09.332802144Z" level=info msg="StartContainer for \"5e3b64dd58c12a4ebcd621af686c6667dbd8e70c8ac72485c417fca56f5f90bb\"" Nov 23 23:05:09.334029 containerd[1523]: time="2025-11-23T23:05:09.333995060Z" level=info msg="connecting to shim 
5e3b64dd58c12a4ebcd621af686c6667dbd8e70c8ac72485c417fca56f5f90bb" address="unix:///run/containerd/s/64f39b841ea27704465940e490f9b1caea31fb3d91088d851885c142f3097159" protocol=ttrpc version=3 Nov 23 23:05:09.344889 systemd[1]: Started cri-containerd-106ef3aa75057be6719e5ce7afb45fc0726c77192d771f38d3c99fee70f3606a.scope - libcontainer container 106ef3aa75057be6719e5ce7afb45fc0726c77192d771f38d3c99fee70f3606a. Nov 23 23:05:09.349909 systemd[1]: Started cri-containerd-8d0a8bc265fc93a7dbafa1cdd3180a464bb014ecfb3e81e7d36b9fae6b671914.scope - libcontainer container 8d0a8bc265fc93a7dbafa1cdd3180a464bb014ecfb3e81e7d36b9fae6b671914. Nov 23 23:05:09.377704 systemd[1]: Started cri-containerd-5e3b64dd58c12a4ebcd621af686c6667dbd8e70c8ac72485c417fca56f5f90bb.scope - libcontainer container 5e3b64dd58c12a4ebcd621af686c6667dbd8e70c8ac72485c417fca56f5f90bb. Nov 23 23:05:09.400821 containerd[1523]: time="2025-11-23T23:05:09.400766339Z" level=info msg="StartContainer for \"106ef3aa75057be6719e5ce7afb45fc0726c77192d771f38d3c99fee70f3606a\" returns successfully" Nov 23 23:05:09.418179 containerd[1523]: time="2025-11-23T23:05:09.418113375Z" level=info msg="StartContainer for \"8d0a8bc265fc93a7dbafa1cdd3180a464bb014ecfb3e81e7d36b9fae6b671914\" returns successfully" Nov 23 23:05:09.437360 containerd[1523]: time="2025-11-23T23:05:09.437235631Z" level=info msg="StartContainer for \"5e3b64dd58c12a4ebcd621af686c6667dbd8e70c8ac72485c417fca56f5f90bb\" returns successfully" Nov 23 23:05:10.001865 kubelet[2316]: I1123 23:05:10.001831 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:10.319166 kubelet[2316]: E1123 23:05:10.318228 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:10.319166 kubelet[2316]: E1123 23:05:10.319021 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:10.320086 kubelet[2316]: E1123 23:05:10.320048 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:10.321174 kubelet[2316]: E1123 23:05:10.321091 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:10.322134 kubelet[2316]: E1123 23:05:10.321960 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:10.322134 kubelet[2316]: E1123 23:05:10.322062 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:11.324174 kubelet[2316]: E1123 23:05:11.324133 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:11.324482 kubelet[2316]: E1123 23:05:11.324309 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:11.324482 kubelet[2316]: E1123 23:05:11.324429 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:11.324985 kubelet[2316]: E1123 23:05:11.324808 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:11.325104 kubelet[2316]: E1123 23:05:11.325027 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:11.325623 kubelet[2316]: E1123 23:05:11.325590 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:11.670550 kubelet[2316]: E1123 23:05:11.669286 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 23 23:05:11.717531 kubelet[2316]: E1123 23:05:11.715401 2316 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.187ac545f26f3996 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-23 23:05:08.279761302 +0000 UTC m=+1.056859912,LastTimestamp:2025-11-23 23:05:08.279761302 +0000 UTC m=+1.056859912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 23 23:05:11.753972 kubelet[2316]: I1123 23:05:11.753915 2316 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:05:11.788100 kubelet[2316]: I1123 23:05:11.788011 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:11.793896 kubelet[2316]: E1123 23:05:11.793851 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:11.793896 kubelet[2316]: I1123 23:05:11.793887 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:11.795696 kubelet[2316]: E1123 23:05:11.795660 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:11.795696 kubelet[2316]: I1123 23:05:11.795685 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:11.799001 kubelet[2316]: E1123 23:05:11.798963 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:12.277839 kubelet[2316]: I1123 23:05:12.277781 2316 apiserver.go:52] "Watching apiserver" Nov 23 23:05:12.286610 kubelet[2316]: I1123 23:05:12.286574 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:05:12.324535 kubelet[2316]: I1123 23:05:12.324422 2316 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:12.325666 kubelet[2316]: I1123 23:05:12.324552 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:12.327026 kubelet[2316]: E1123 23:05:12.326571 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:12.327026 kubelet[2316]: E1123 23:05:12.326730 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:12.327210 kubelet[2316]: E1123 23:05:12.327181 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:12.327385 kubelet[2316]: E1123 23:05:12.327355 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:13.326570 kubelet[2316]: I1123 23:05:13.326526 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:13.333306 kubelet[2316]: E1123 23:05:13.333250 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:13.657673 systemd[1]: Reload requested from client PID 2597 ('systemctl') (unit session-7.scope)... Nov 23 23:05:13.657690 systemd[1]: Reloading... Nov 23 23:05:13.727537 zram_generator::config[2639]: No configuration found. Nov 23 23:05:13.948766 systemd[1]: Reloading finished in 290 ms. Nov 23 23:05:13.971587 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:13.983580 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:05:13.983816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:13.983882 systemd[1]: kubelet.service: Consumed 1.465s CPU time, 128.4M memory peak. Nov 23 23:05:13.985661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:14.146482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:14.151369 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:05:14.204955 kubelet[2682]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:05:14.205282 kubelet[2682]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:05:14.205282 kubelet[2682]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 23 23:05:14.205633 kubelet[2682]: I1123 23:05:14.205580 2682 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:05:14.211718 kubelet[2682]: I1123 23:05:14.211673 2682 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 23:05:14.211718 kubelet[2682]: I1123 23:05:14.211704 2682 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:05:14.212049 kubelet[2682]: I1123 23:05:14.212030 2682 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 23:05:14.213689 kubelet[2682]: I1123 23:05:14.213668 2682 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 23 23:05:14.216092 kubelet[2682]: I1123 23:05:14.216011 2682 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:05:14.222983 kubelet[2682]: I1123 23:05:14.222945 2682 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:05:14.227547 kubelet[2682]: I1123 23:05:14.227211 2682 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 23 23:05:14.227547 kubelet[2682]: I1123 23:05:14.227455 2682 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:05:14.227723 kubelet[2682]: I1123 23:05:14.227487 2682 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:05:14.227723 kubelet[2682]: I1123 23:05:14.227723 2682 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:05:14.227836 kubelet[2682]: I1123 23:05:14.227732 2682 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 23:05:14.227836 kubelet[2682]: I1123 23:05:14.227780 2682 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:05:14.227945 kubelet[2682]: I1123 
23:05:14.227917 2682 kubelet.go:446] "Attempting to sync node with API server" Nov 23 23:05:14.227945 kubelet[2682]: I1123 23:05:14.227934 2682 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:05:14.227995 kubelet[2682]: I1123 23:05:14.227959 2682 kubelet.go:352] "Adding apiserver pod source" Nov 23 23:05:14.227995 kubelet[2682]: I1123 23:05:14.227970 2682 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:05:14.229777 kubelet[2682]: I1123 23:05:14.229739 2682 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:05:14.231484 kubelet[2682]: I1123 23:05:14.231459 2682 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 23:05:14.232199 kubelet[2682]: I1123 23:05:14.232167 2682 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:05:14.232372 kubelet[2682]: I1123 23:05:14.232328 2682 server.go:1287] "Started kubelet" Nov 23 23:05:14.233095 kubelet[2682]: I1123 23:05:14.233048 2682 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:05:14.233256 kubelet[2682]: I1123 23:05:14.233209 2682 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:05:14.233575 kubelet[2682]: I1123 23:05:14.233554 2682 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:05:14.234747 kubelet[2682]: I1123 23:05:14.233937 2682 server.go:479] "Adding debug handlers to kubelet server" Nov 23 23:05:14.235644 kubelet[2682]: I1123 23:05:14.234154 2682 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:05:14.235961 kubelet[2682]: I1123 23:05:14.234266 2682 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:05:14.236141 kubelet[2682]: I1123 23:05:14.236118 2682 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:05:14.236141 kubelet[2682]: E1123 23:05:14.236131 2682 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:14.236465 kubelet[2682]: I1123 23:05:14.236440 2682 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:05:14.236594 kubelet[2682]: I1123 23:05:14.236577 2682 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:05:14.237484 kubelet[2682]: E1123 23:05:14.237453 2682 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:05:14.237935 kubelet[2682]: I1123 23:05:14.237911 2682 factory.go:221] Registration of the systemd container factory successfully Nov 23 23:05:14.238124 kubelet[2682]: I1123 23:05:14.238094 2682 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:05:14.244774 kubelet[2682]: I1123 23:05:14.243810 2682 factory.go:221] Registration of the containerd container factory successfully Nov 23 23:05:14.255038 kubelet[2682]: I1123 23:05:14.254983 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:05:14.256691 kubelet[2682]: I1123 23:05:14.256663 2682 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 23 23:05:14.256818 kubelet[2682]: I1123 23:05:14.256807 2682 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:05:14.256884 kubelet[2682]: I1123 23:05:14.256875 2682 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:05:14.256933 kubelet[2682]: I1123 23:05:14.256925 2682 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:05:14.257470 kubelet[2682]: E1123 23:05:14.257440 2682 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:05:14.293774 kubelet[2682]: I1123 23:05:14.293744 2682 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:05:14.293774 kubelet[2682]: I1123 23:05:14.293765 2682 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:05:14.293774 kubelet[2682]: I1123 23:05:14.293786 2682 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:05:14.294028 kubelet[2682]: I1123 23:05:14.294000 2682 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:05:14.294063 kubelet[2682]: I1123 23:05:14.294025 2682 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:05:14.294063 kubelet[2682]: I1123 23:05:14.294060 2682 policy_none.go:49] "None policy: Start" Nov 23 23:05:14.294128 kubelet[2682]: I1123 23:05:14.294068 2682 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:05:14.294128 kubelet[2682]: I1123 23:05:14.294081 2682 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:05:14.294203 kubelet[2682]: I1123 23:05:14.294190 2682 state_mem.go:75] "Updated machine memory state" Nov 23 23:05:14.298378 kubelet[2682]: I1123 23:05:14.298193 2682 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:05:14.298378 kubelet[2682]: I1123 23:05:14.298370 2682 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:05:14.298517 kubelet[2682]: I1123 23:05:14.298382 2682 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:05:14.298708 kubelet[2682]: I1123 23:05:14.298691 2682 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:05:14.299536 kubelet[2682]: E1123 23:05:14.299516 2682 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:05:14.358327 kubelet[2682]: I1123 23:05:14.358283 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:14.358327 kubelet[2682]: I1123 23:05:14.358316 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:14.358639 kubelet[2682]: I1123 23:05:14.358613 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:14.365078 kubelet[2682]: E1123 23:05:14.365039 2682 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:14.402971 kubelet[2682]: I1123 23:05:14.402930 2682 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:14.410216 kubelet[2682]: I1123 23:05:14.410181 2682 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 23 23:05:14.410352 kubelet[2682]: I1123 23:05:14.410286 2682 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:05:14.538539 kubelet[2682]: I1123 23:05:14.538308 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7694bb2d8b25cccd30af6fe17aa695d7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7694bb2d8b25cccd30af6fe17aa695d7\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:14.538539 kubelet[2682]: I1123 23:05:14.538348 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7694bb2d8b25cccd30af6fe17aa695d7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7694bb2d8b25cccd30af6fe17aa695d7\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:14.538539 kubelet[2682]: I1123 23:05:14.538370 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:14.538539 kubelet[2682]: I1123 23:05:14.538389 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:14.538539 kubelet[2682]: I1123 23:05:14.538407 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:14.539086 kubelet[2682]: I1123 23:05:14.538448 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " 
pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:14.539086 kubelet[2682]: I1123 23:05:14.538513 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7694bb2d8b25cccd30af6fe17aa695d7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7694bb2d8b25cccd30af6fe17aa695d7\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:14.539086 kubelet[2682]: I1123 23:05:14.539048 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:14.539086 kubelet[2682]: I1123 23:05:14.539071 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:14.665742 kubelet[2682]: E1123 23:05:14.665688 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:14.665742 kubelet[2682]: E1123 23:05:14.665694 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:14.665938 kubelet[2682]: E1123 23:05:14.665702 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:15.229230 kubelet[2682]: I1123 23:05:15.229094 2682 apiserver.go:52] "Watching apiserver" Nov 23 23:05:15.237102 kubelet[2682]: I1123 23:05:15.237038 2682 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:05:15.279141 kubelet[2682]: I1123 23:05:15.278931 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:15.279250 kubelet[2682]: E1123 23:05:15.279199 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:15.279451 kubelet[2682]: E1123 23:05:15.279431 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:15.284922 kubelet[2682]: E1123 23:05:15.284795 2682 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:15.285063 kubelet[2682]: E1123 23:05:15.285046 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:15.312547 kubelet[2682]: I1123 23:05:15.312170 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3121516899999999 podStartE2EDuration="1.31215169s" 
podCreationTimestamp="2025-11-23 23:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:15.301632394 +0000 UTC m=+1.146338340" watchObservedRunningTime="2025-11-23 23:05:15.31215169 +0000 UTC m=+1.156857636" Nov 23 23:05:15.321633 kubelet[2682]: I1123 23:05:15.321565 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.321545346 podStartE2EDuration="2.321545346s" podCreationTimestamp="2025-11-23 23:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:15.312245123 +0000 UTC m=+1.156951069" watchObservedRunningTime="2025-11-23 23:05:15.321545346 +0000 UTC m=+1.166251292" Nov 23 23:05:15.322147 kubelet[2682]: I1123 23:05:15.321658 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.321654351 podStartE2EDuration="1.321654351s" podCreationTimestamp="2025-11-23 23:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:15.321522528 +0000 UTC m=+1.166228474" watchObservedRunningTime="2025-11-23 23:05:15.321654351 +0000 UTC m=+1.166360297" Nov 23 23:05:16.284703 kubelet[2682]: E1123 23:05:16.284668 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:16.290491 kubelet[2682]: E1123 23:05:16.290441 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:17.346906 kubelet[2682]: E1123 23:05:17.346860 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:19.265018 kubelet[2682]: E1123 23:05:19.264983 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:19.285993 kubelet[2682]: E1123 23:05:19.285949 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:19.343589 kubelet[2682]: I1123 23:05:19.343549 2682 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:05:19.343878 containerd[1523]: time="2025-11-23T23:05:19.343843679Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 23 23:05:19.344233 kubelet[2682]: I1123 23:05:19.344126 2682 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:05:20.291385 kubelet[2682]: E1123 23:05:20.291341 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:20.337113 systemd[1]: Created slice kubepods-besteffort-pod6beaa6e8_7a98_4f67_b5c1_ffd9903438e4.slice - libcontainer container kubepods-besteffort-pod6beaa6e8_7a98_4f67_b5c1_ffd9903438e4.slice. Nov 23 23:05:20.374856 kubelet[2682]: I1123 23:05:20.374726 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6beaa6e8-7a98-4f67-b5c1-ffd9903438e4-kube-proxy\") pod \"kube-proxy-srt6g\" (UID: \"6beaa6e8-7a98-4f67-b5c1-ffd9903438e4\") " pod="kube-system/kube-proxy-srt6g" Nov 23 23:05:20.374856 kubelet[2682]: I1123 23:05:20.374770 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6beaa6e8-7a98-4f67-b5c1-ffd9903438e4-xtables-lock\") pod \"kube-proxy-srt6g\" (UID: \"6beaa6e8-7a98-4f67-b5c1-ffd9903438e4\") " pod="kube-system/kube-proxy-srt6g" Nov 23 23:05:20.374856 kubelet[2682]: I1123 23:05:20.374793 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6beaa6e8-7a98-4f67-b5c1-ffd9903438e4-lib-modules\") pod \"kube-proxy-srt6g\" (UID: \"6beaa6e8-7a98-4f67-b5c1-ffd9903438e4\") " pod="kube-system/kube-proxy-srt6g" Nov 23 23:05:20.374856 kubelet[2682]: I1123 23:05:20.374812 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55s9\" (UniqueName: \"kubernetes.io/projected/6beaa6e8-7a98-4f67-b5c1-ffd9903438e4-kube-api-access-b55s9\") pod \"kube-proxy-srt6g\" (UID: \"6beaa6e8-7a98-4f67-b5c1-ffd9903438e4\") " pod="kube-system/kube-proxy-srt6g" Nov 23 23:05:20.506350 systemd[1]: Created slice kubepods-besteffort-pod17022220_d47f_4aac_97d0_b84d5480a340.slice - libcontainer container kubepods-besteffort-pod17022220_d47f_4aac_97d0_b84d5480a340.slice. 
Nov 23 23:05:20.576250 kubelet[2682]: I1123 23:05:20.576119 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17022220-d47f-4aac-97d0-b84d5480a340-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dbt57\" (UID: \"17022220-d47f-4aac-97d0-b84d5480a340\") " pod="tigera-operator/tigera-operator-7dcd859c48-dbt57" Nov 23 23:05:20.576250 kubelet[2682]: I1123 23:05:20.576197 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwxc\" (UniqueName: \"kubernetes.io/projected/17022220-d47f-4aac-97d0-b84d5480a340-kube-api-access-6jwxc\") pod \"tigera-operator-7dcd859c48-dbt57\" (UID: \"17022220-d47f-4aac-97d0-b84d5480a340\") " pod="tigera-operator/tigera-operator-7dcd859c48-dbt57" Nov 23 23:05:20.653123 kubelet[2682]: E1123 23:05:20.653085 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:20.654527 containerd[1523]: time="2025-11-23T23:05:20.654456884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srt6g,Uid:6beaa6e8-7a98-4f67-b5c1-ffd9903438e4,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:20.669318 containerd[1523]: time="2025-11-23T23:05:20.669272641Z" level=info msg="connecting to shim e4558d212b77cdc4b9ab4cfeb9c9cb9341c1ee5fb82e4ad711b2e2fe10764184" address="unix:///run/containerd/s/a749aca1d3df276fd668e801ed0b61f39bbaf8eb5521cfb72c3377e6c32bdcaf" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:20.701721 systemd[1]: Started cri-containerd-e4558d212b77cdc4b9ab4cfeb9c9cb9341c1ee5fb82e4ad711b2e2fe10764184.scope - libcontainer container e4558d212b77cdc4b9ab4cfeb9c9cb9341c1ee5fb82e4ad711b2e2fe10764184. 
Nov 23 23:05:20.722911 containerd[1523]: time="2025-11-23T23:05:20.722874682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srt6g,Uid:6beaa6e8-7a98-4f67-b5c1-ffd9903438e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4558d212b77cdc4b9ab4cfeb9c9cb9341c1ee5fb82e4ad711b2e2fe10764184\"" Nov 23 23:05:20.723564 kubelet[2682]: E1123 23:05:20.723543 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:20.726796 containerd[1523]: time="2025-11-23T23:05:20.726725197Z" level=info msg="CreateContainer within sandbox \"e4558d212b77cdc4b9ab4cfeb9c9cb9341c1ee5fb82e4ad711b2e2fe10764184\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:05:20.736650 containerd[1523]: time="2025-11-23T23:05:20.736601395Z" level=info msg="Container e753970c6a8a0099b1dcc247f3ff69901176c4bab6a1ec5081c9f850959ca39c: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:20.743395 containerd[1523]: time="2025-11-23T23:05:20.743343419Z" level=info msg="CreateContainer within sandbox \"e4558d212b77cdc4b9ab4cfeb9c9cb9341c1ee5fb82e4ad711b2e2fe10764184\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e753970c6a8a0099b1dcc247f3ff69901176c4bab6a1ec5081c9f850959ca39c\"" Nov 23 23:05:20.744692 containerd[1523]: time="2025-11-23T23:05:20.744125922Z" level=info msg="StartContainer for \"e753970c6a8a0099b1dcc247f3ff69901176c4bab6a1ec5081c9f850959ca39c\"" Nov 23 23:05:20.746622 containerd[1523]: time="2025-11-23T23:05:20.746593660Z" level=info msg="connecting to shim e753970c6a8a0099b1dcc247f3ff69901176c4bab6a1ec5081c9f850959ca39c" address="unix:///run/containerd/s/a749aca1d3df276fd668e801ed0b61f39bbaf8eb5521cfb72c3377e6c32bdcaf" protocol=ttrpc version=3 Nov 23 23:05:20.770709 systemd[1]: Started cri-containerd-e753970c6a8a0099b1dcc247f3ff69901176c4bab6a1ec5081c9f850959ca39c.scope - libcontainer container e753970c6a8a0099b1dcc247f3ff69901176c4bab6a1ec5081c9f850959ca39c. Nov 23 23:05:20.810523 containerd[1523]: time="2025-11-23T23:05:20.810463810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dbt57,Uid:17022220-d47f-4aac-97d0-b84d5480a340,Namespace:tigera-operator,Attempt:0,}" Nov 23 23:05:20.824465 containerd[1523]: time="2025-11-23T23:05:20.824308632Z" level=info msg="StartContainer for \"e753970c6a8a0099b1dcc247f3ff69901176c4bab6a1ec5081c9f850959ca39c\" returns successfully" Nov 23 23:05:20.835289 containerd[1523]: time="2025-11-23T23:05:20.835145277Z" level=info msg="connecting to shim 889c6fc541e56883dee9663dc8f6f0ae6c3bb67237418a55cc43b64555243326" address="unix:///run/containerd/s/ee9f4dd0f2dc7664bb03d4c1feeb9674b0cf2c8a58a90235f67c3e1b06fbe383" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:20.859915 systemd[1]: Started cri-containerd-889c6fc541e56883dee9663dc8f6f0ae6c3bb67237418a55cc43b64555243326.scope - libcontainer container 889c6fc541e56883dee9663dc8f6f0ae6c3bb67237418a55cc43b64555243326. 
Nov 23 23:05:20.896672 containerd[1523]: time="2025-11-23T23:05:20.896579547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dbt57,Uid:17022220-d47f-4aac-97d0-b84d5480a340,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"889c6fc541e56883dee9663dc8f6f0ae6c3bb67237418a55cc43b64555243326\"" Nov 23 23:05:20.898572 containerd[1523]: time="2025-11-23T23:05:20.898503804Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 23:05:21.294088 kubelet[2682]: E1123 23:05:21.294063 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:21.788683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095613059.mount: Deactivated successfully. Nov 23 23:05:22.095587 containerd[1523]: time="2025-11-23T23:05:22.095455719Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:22.096386 containerd[1523]: time="2025-11-23T23:05:22.096208998Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 23:05:22.097363 containerd[1523]: time="2025-11-23T23:05:22.097325911Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:22.099427 containerd[1523]: time="2025-11-23T23:05:22.099397090Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:22.100794 containerd[1523]: time="2025-11-23T23:05:22.100769058Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.202230514s" Nov 23 23:05:22.100864 containerd[1523]: time="2025-11-23T23:05:22.100799834Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 23:05:22.102786 containerd[1523]: time="2025-11-23T23:05:22.102748708Z" level=info msg="CreateContainer within sandbox \"889c6fc541e56883dee9663dc8f6f0ae6c3bb67237418a55cc43b64555243326\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 23:05:22.109538 containerd[1523]: time="2025-11-23T23:05:22.109198731Z" level=info msg="Container 6fa6156cb6442efc7aa05b8581cf0106c1bffeb87295ba7560ed8b41e68572c9: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:22.118067 containerd[1523]: time="2025-11-23T23:05:22.118011367Z" level=info msg="CreateContainer within sandbox \"889c6fc541e56883dee9663dc8f6f0ae6c3bb67237418a55cc43b64555243326\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6fa6156cb6442efc7aa05b8581cf0106c1bffeb87295ba7560ed8b41e68572c9\"" Nov 23 23:05:22.118656 containerd[1523]: time="2025-11-23T23:05:22.118527601Z" level=info msg="StartContainer for \"6fa6156cb6442efc7aa05b8581cf0106c1bffeb87295ba7560ed8b41e68572c9\"" Nov 23 23:05:22.119329 containerd[1523]: time="2025-11-23T23:05:22.119306494Z" level=info 
msg="connecting to shim 6fa6156cb6442efc7aa05b8581cf0106c1bffeb87295ba7560ed8b41e68572c9" address="unix:///run/containerd/s/ee9f4dd0f2dc7664bb03d4c1feeb9674b0cf2c8a58a90235f67c3e1b06fbe383" protocol=ttrpc version=3 Nov 23 23:05:22.142692 systemd[1]: Started cri-containerd-6fa6156cb6442efc7aa05b8581cf0106c1bffeb87295ba7560ed8b41e68572c9.scope - libcontainer container 6fa6156cb6442efc7aa05b8581cf0106c1bffeb87295ba7560ed8b41e68572c9. Nov 23 23:05:22.173471 containerd[1523]: time="2025-11-23T23:05:22.173429853Z" level=info msg="StartContainer for \"6fa6156cb6442efc7aa05b8581cf0106c1bffeb87295ba7560ed8b41e68572c9\" returns successfully" Nov 23 23:05:22.304410 kubelet[2682]: I1123 23:05:22.304360 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-srt6g" podStartSLOduration=2.304343079 podStartE2EDuration="2.304343079s" podCreationTimestamp="2025-11-23 23:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:21.304709632 +0000 UTC m=+7.149415578" watchObservedRunningTime="2025-11-23 23:05:22.304343079 +0000 UTC m=+8.149049025" Nov 23 23:05:22.305406 kubelet[2682]: I1123 23:05:22.304447 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dbt57" podStartSLOduration=1.100953836 podStartE2EDuration="2.304444053s" podCreationTimestamp="2025-11-23 23:05:20 +0000 UTC" firstStartedPulling="2025-11-23 23:05:20.89795512 +0000 UTC m=+6.742661026" lastFinishedPulling="2025-11-23 23:05:22.101445297 +0000 UTC m=+7.946151243" observedRunningTime="2025-11-23 23:05:22.304026471 +0000 UTC m=+8.148732417" watchObservedRunningTime="2025-11-23 23:05:22.304444053 +0000 UTC m=+8.149150039" Nov 23 23:05:23.368100 kubelet[2682]: E1123 23:05:23.368059 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:24.308419 kubelet[2682]: E1123 23:05:24.307840 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:26.438215 update_engine[1506]: I20251123 23:05:26.438148 1506 update_attempter.cc:509] Updating boot flags... Nov 23 23:05:27.356834 kubelet[2682]: E1123 23:05:27.356619 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:27.692172 sudo[1740]: pam_unix(sudo:session): session closed for user root Nov 23 23:05:27.694143 sshd[1739]: Connection closed by 10.0.0.1 port 48778 Nov 23 23:05:27.694611 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Nov 23 23:05:27.699842 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:48778.service: Deactivated successfully. Nov 23 23:05:27.701831 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 23:05:27.702052 systemd[1]: session-7.scope: Consumed 7.573s CPU time, 225.2M memory peak. Nov 23 23:05:27.711855 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit. Nov 23 23:05:27.717794 systemd-logind[1494]: Removed session 7. Nov 23 23:05:34.893972 systemd[1]: Created slice kubepods-besteffort-pod062fe414_62cc_488f_ac95_d6301a9bd12b.slice - libcontainer container kubepods-besteffort-pod062fe414_62cc_488f_ac95_d6301a9bd12b.slice. 
Nov 23 23:05:34.975290 kubelet[2682]: I1123 23:05:34.975151 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/062fe414-62cc-488f-ac95-d6301a9bd12b-typha-certs\") pod \"calico-typha-55b47bd55b-g5w7n\" (UID: \"062fe414-62cc-488f-ac95-d6301a9bd12b\") " pod="calico-system/calico-typha-55b47bd55b-g5w7n" Nov 23 23:05:34.975290 kubelet[2682]: I1123 23:05:34.975207 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062fe414-62cc-488f-ac95-d6301a9bd12b-tigera-ca-bundle\") pod \"calico-typha-55b47bd55b-g5w7n\" (UID: \"062fe414-62cc-488f-ac95-d6301a9bd12b\") " pod="calico-system/calico-typha-55b47bd55b-g5w7n" Nov 23 23:05:34.975290 kubelet[2682]: I1123 23:05:34.975235 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxqwf\" (UniqueName: \"kubernetes.io/projected/062fe414-62cc-488f-ac95-d6301a9bd12b-kube-api-access-pxqwf\") pod \"calico-typha-55b47bd55b-g5w7n\" (UID: \"062fe414-62cc-488f-ac95-d6301a9bd12b\") " pod="calico-system/calico-typha-55b47bd55b-g5w7n" Nov 23 23:05:35.028980 systemd[1]: Created slice kubepods-besteffort-poda5785041_5c8b_4e5a_907d_02963fe171ef.slice - libcontainer container kubepods-besteffort-poda5785041_5c8b_4e5a_907d_02963fe171ef.slice. Nov 23 23:05:35.076074 kubelet[2682]: I1123 23:05:35.076026 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-xtables-lock\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076074 kubelet[2682]: I1123 23:05:35.076073 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-flexvol-driver-host\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076246 kubelet[2682]: I1123 23:05:35.076095 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5785041-5c8b-4e5a-907d-02963fe171ef-tigera-ca-bundle\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076246 kubelet[2682]: I1123 23:05:35.076112 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-var-run-calico\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076246 kubelet[2682]: I1123 23:05:35.076143 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-policysync\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076246 kubelet[2682]: I1123 23:05:35.076177 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-certs\" (UniqueName: \"kubernetes.io/secret/a5785041-5c8b-4e5a-907d-02963fe171ef-node-certs\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076246 kubelet[2682]: I1123 23:05:35.076194 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-lib-modules\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076354 kubelet[2682]: I1123 23:05:35.076213 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-cni-net-dir\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076354 kubelet[2682]: I1123 23:05:35.076241 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-cni-bin-dir\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076354 kubelet[2682]: I1123 23:05:35.076270 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjlh\" (UniqueName: \"kubernetes.io/projected/a5785041-5c8b-4e5a-907d-02963fe171ef-kube-api-access-dtjlh\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076354 kubelet[2682]: I1123 23:05:35.076286 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-var-lib-calico\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.076354 kubelet[2682]: I1123 23:05:35.076302 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a5785041-5c8b-4e5a-907d-02963fe171ef-cni-log-dir\") pod \"calico-node-ckqs8\" (UID: \"a5785041-5c8b-4e5a-907d-02963fe171ef\") " pod="calico-system/calico-node-ckqs8" Nov 23 23:05:35.197192 kubelet[2682]: E1123 23:05:35.197075 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.197192 kubelet[2682]: W1123 23:05:35.197103 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.197192 kubelet[2682]: E1123 23:05:35.197131 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:35.202224 kubelet[2682]: E1123 23:05:35.202195 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:35.203628 containerd[1523]: time="2025-11-23T23:05:35.202729117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b47bd55b-g5w7n,Uid:062fe414-62cc-488f-ac95-d6301a9bd12b,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:35.230896 kubelet[2682]: E1123 23:05:35.230630 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:05:35.267916 containerd[1523]: time="2025-11-23T23:05:35.267856094Z" level=info msg="connecting to shim c8b043274f0825265d3c2619cb62fee255e3f1942e33c4301e5d8f8a4b6a729a" address="unix:///run/containerd/s/b4ce777beacfb5ed47d906fbcb9f7b4b4c8d812a15454ca1605dc7007da6cad2" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:35.277997 kubelet[2682]: E1123 23:05:35.277962 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.277997 kubelet[2682]: W1123 23:05:35.277987 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.278180 kubelet[2682]: E1123 23:05:35.278010 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.278214 kubelet[2682]: E1123 23:05:35.278189 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.288767 kubelet[2682]: W1123 23:05:35.278197 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.288767 kubelet[2682]: E1123 23:05:35.288775 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.289701 kubelet[2682]: E1123 23:05:35.289324 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.289701 kubelet[2682]: W1123 23:05:35.289341 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.289701 kubelet[2682]: E1123 23:05:35.289358 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:35.305291 kubelet[2682]: E1123 23:05:35.305240 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.305291 kubelet[2682]: W1123 23:05:35.305262 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.305291 kubelet[2682]: E1123 23:05:35.305287 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.306467 kubelet[2682]: E1123 23:05:35.306169 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.306467 kubelet[2682]: W1123 23:05:35.306457 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.306578 kubelet[2682]: E1123 23:05:35.306475 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.308326 kubelet[2682]: E1123 23:05:35.308301 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.308326 kubelet[2682]: W1123 23:05:35.308320 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.308447 kubelet[2682]: E1123 23:05:35.308340 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.308447 kubelet[2682]: I1123 23:05:35.308375 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06591145-f7c8-4eb9-86a0-ddb163a9822f-kubelet-dir\") pod \"csi-node-driver-pdb4s\" (UID: \"06591145-f7c8-4eb9-86a0-ddb163a9822f\") " pod="calico-system/csi-node-driver-pdb4s" Nov 23 23:05:35.308649 kubelet[2682]: E1123 23:05:35.308621 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.308649 kubelet[2682]: W1123 23:05:35.308639 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.308714 kubelet[2682]: E1123 23:05:35.308657 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:35.308714 kubelet[2682]: I1123 23:05:35.308674 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06591145-f7c8-4eb9-86a0-ddb163a9822f-registration-dir\") pod \"csi-node-driver-pdb4s\" (UID: \"06591145-f7c8-4eb9-86a0-ddb163a9822f\") " pod="calico-system/csi-node-driver-pdb4s" Nov 23 23:05:35.309207 kubelet[2682]: E1123 23:05:35.309058 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.309207 kubelet[2682]: W1123 23:05:35.309097 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.309207 kubelet[2682]: E1123 23:05:35.309113 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.309207 kubelet[2682]: I1123 23:05:35.309134 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06591145-f7c8-4eb9-86a0-ddb163a9822f-varrun\") pod \"csi-node-driver-pdb4s\" (UID: \"06591145-f7c8-4eb9-86a0-ddb163a9822f\") " pod="calico-system/csi-node-driver-pdb4s" Nov 23 23:05:35.309716 kubelet[2682]: E1123 23:05:35.309689 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.309716 kubelet[2682]: W1123 23:05:35.309707 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.309816 kubelet[2682]: E1123 23:05:35.309732 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.310683 kubelet[2682]: E1123 23:05:35.310659 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.310683 kubelet[2682]: W1123 23:05:35.310674 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.310850 kubelet[2682]: E1123 23:05:35.310779 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:35.311409 kubelet[2682]: E1123 23:05:35.310939 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.311409 kubelet[2682]: W1123 23:05:35.310952 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.311409 kubelet[2682]: E1123 23:05:35.311096 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.311409 kubelet[2682]: W1123 23:05:35.311105 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.311409 kubelet[2682]: E1123 23:05:35.311346 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.311409 kubelet[2682]: E1123 23:05:35.311370 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.311409 kubelet[2682]: E1123 23:05:35.311378 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.311409 kubelet[2682]: W1123 23:05:35.311382 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.311409 kubelet[2682]: E1123 23:05:35.311408 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.311688 kubelet[2682]: I1123 23:05:35.311432 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bjmp\" (UniqueName: \"kubernetes.io/projected/06591145-f7c8-4eb9-86a0-ddb163a9822f-kube-api-access-7bjmp\") pod \"csi-node-driver-pdb4s\" (UID: \"06591145-f7c8-4eb9-86a0-ddb163a9822f\") " pod="calico-system/csi-node-driver-pdb4s" Nov 23 23:05:35.311688 kubelet[2682]: E1123 23:05:35.311660 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.311688 kubelet[2682]: W1123 23:05:35.311674 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.312527 kubelet[2682]: E1123 23:05:35.311794 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:35.312632 kubelet[2682]: E1123 23:05:35.312556 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.312632 kubelet[2682]: W1123 23:05:35.312573 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.312632 kubelet[2682]: E1123 23:05:35.312590 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.313121 kubelet[2682]: E1123 23:05:35.313045 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.313121 kubelet[2682]: W1123 23:05:35.313062 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.313121 kubelet[2682]: E1123 23:05:35.313092 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.313238 kubelet[2682]: I1123 23:05:35.313128 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06591145-f7c8-4eb9-86a0-ddb163a9822f-socket-dir\") pod \"csi-node-driver-pdb4s\" (UID: \"06591145-f7c8-4eb9-86a0-ddb163a9822f\") " pod="calico-system/csi-node-driver-pdb4s" Nov 23 23:05:35.313840 kubelet[2682]: E1123 23:05:35.313636 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.313840 kubelet[2682]: W1123 23:05:35.313838 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.313916 kubelet[2682]: E1123 23:05:35.313865 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.314523 kubelet[2682]: E1123 23:05:35.314474 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.314810 kubelet[2682]: W1123 23:05:35.314777 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.314869 kubelet[2682]: E1123 23:05:35.314815 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:35.315192 kubelet[2682]: E1123 23:05:35.315171 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.315192 kubelet[2682]: W1123 23:05:35.315189 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.315271 kubelet[2682]: E1123 23:05:35.315203 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.315377 kubelet[2682]: E1123 23:05:35.315362 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.315377 kubelet[2682]: W1123 23:05:35.315373 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.315440 kubelet[2682]: E1123 23:05:35.315382 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.333999 kubelet[2682]: E1123 23:05:35.333961 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:35.334484 containerd[1523]: time="2025-11-23T23:05:35.334436441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ckqs8,Uid:a5785041-5c8b-4e5a-907d-02963fe171ef,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:35.338132 systemd[1]: Started cri-containerd-c8b043274f0825265d3c2619cb62fee255e3f1942e33c4301e5d8f8a4b6a729a.scope - libcontainer container c8b043274f0825265d3c2619cb62fee255e3f1942e33c4301e5d8f8a4b6a729a. Nov 23 23:05:35.414806 kubelet[2682]: E1123 23:05:35.414763 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.414806 kubelet[2682]: W1123 23:05:35.414791 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.415070 kubelet[2682]: E1123 23:05:35.414818 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.416681 kubelet[2682]: E1123 23:05:35.416647 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.416681 kubelet[2682]: W1123 23:05:35.416672 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.416835 kubelet[2682]: E1123 23:05:35.416703 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:35.428762 kubelet[2682]: E1123 23:05:35.428737 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.428762 kubelet[2682]: W1123 23:05:35.428759 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.430756 kubelet[2682]: E1123 23:05:35.429120 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.430756 kubelet[2682]: E1123 23:05:35.430734 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.430756 kubelet[2682]: W1123 23:05:35.430755 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.430951 kubelet[2682]: E1123 23:05:35.430858 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.431037 containerd[1523]: time="2025-11-23T23:05:35.430992547Z" level=info msg="connecting to shim 2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6" address="unix:///run/containerd/s/2de7c842ef3662d36cf1549ef598b95103409fed0096a4225e358157c04a0912" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:35.432639 kubelet[2682]: E1123 23:05:35.432604 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.432639 kubelet[2682]: W1123 23:05:35.432631 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.432863 kubelet[2682]: E1123 23:05:35.432811 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.434287 kubelet[2682]: E1123 23:05:35.434258 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.434287 kubelet[2682]: W1123 23:05:35.434277 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.434465 kubelet[2682]: E1123 23:05:35.434442 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:35.441656 containerd[1523]: time="2025-11-23T23:05:35.441609217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b47bd55b-g5w7n,Uid:062fe414-62cc-488f-ac95-d6301a9bd12b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8b043274f0825265d3c2619cb62fee255e3f1942e33c4301e5d8f8a4b6a729a\"" Nov 23 23:05:35.444235 kubelet[2682]: E1123 23:05:35.444205 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:35.445215 containerd[1523]: time="2025-11-23T23:05:35.445169259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:05:35.456311 kubelet[2682]: E1123 23:05:35.455913 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:35.456311 kubelet[2682]: W1123 23:05:35.455938 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:35.456311 kubelet[2682]: E1123 23:05:35.455958 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:35.479747 systemd[1]: Started cri-containerd-2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6.scope - libcontainer container 2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6. Nov 23 23:05:35.508828 containerd[1523]: time="2025-11-23T23:05:35.508781370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ckqs8,Uid:a5785041-5c8b-4e5a-907d-02963fe171ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6\"" Nov 23 23:05:35.509628 kubelet[2682]: E1123 23:05:35.509607 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:36.339768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681675205.mount: Deactivated successfully. 
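Each FlexVolume triplet above is one failed probe of the nodeagent~uds plugin directory: the uds executable is absent, the driver call therefore captures empty output, and driver-call.go's JSON decode of that empty string yields the logged "unexpected end of JSON input" verbatim. A minimal reproduction of the decode step (the struct is illustrative, not kubelet's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus stands in for the JSON a FlexVolume driver is expected to
// print in response to "init"; the field name follows the FlexVolume
// convention but the type here is illustrative.
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	var s driverStatus
	// The missing binary produces no output, so the probe unmarshals "".
	err := json.Unmarshal([]byte(""), &s)
	fmt.Println(err) // unexpected end of JSON input, exactly as logged
}
```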
Nov 23 23:05:37.047266 containerd[1523]: time="2025-11-23T23:05:37.047166269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:37.048536 containerd[1523]: time="2025-11-23T23:05:37.048461564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 23:05:37.049634 containerd[1523]: time="2025-11-23T23:05:37.049590696Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:37.052526 containerd[1523]: time="2025-11-23T23:05:37.052385019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:37.053634 containerd[1523]: time="2025-11-23T23:05:37.053162460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.607558758s" Nov 23 23:05:37.053634 containerd[1523]: time="2025-11-23T23:05:37.053202910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 23:05:37.058212 containerd[1523]: time="2025-11-23T23:05:37.057954779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 23:05:37.088118 containerd[1523]: time="2025-11-23T23:05:37.088073328Z" level=info msg="CreateContainer within sandbox \"c8b043274f0825265d3c2619cb62fee255e3f1942e33c4301e5d8f8a4b6a729a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 23:05:37.096880 containerd[1523]: time="2025-11-23T23:05:37.095960287Z" level=info msg="Container 8801adc269278cc7a4e5b69e141163a2ab374d293be92df564bc276398ba2958: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:37.106122 containerd[1523]: time="2025-11-23T23:05:37.106082185Z" level=info msg="CreateContainer within sandbox \"c8b043274f0825265d3c2619cb62fee255e3f1942e33c4301e5d8f8a4b6a729a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8801adc269278cc7a4e5b69e141163a2ab374d293be92df564bc276398ba2958\"" Nov 23 23:05:37.106936 containerd[1523]: time="2025-11-23T23:05:37.106911239Z" level=info msg="StartContainer for \"8801adc269278cc7a4e5b69e141163a2ab374d293be92df564bc276398ba2958\"" Nov 23 23:05:37.108244 containerd[1523]: time="2025-11-23T23:05:37.108138557Z" level=info msg="connecting to shim 8801adc269278cc7a4e5b69e141163a2ab374d293be92df564bc276398ba2958" address="unix:///run/containerd/s/b4ce777beacfb5ed47d906fbcb9f7b4b4c8d812a15454ca1605dc7007da6cad2" protocol=ttrpc version=3 Nov 23 23:05:37.129761 systemd[1]: Started cri-containerd-8801adc269278cc7a4e5b69e141163a2ab374d293be92df564bc276398ba2958.scope - libcontainer container 8801adc269278cc7a4e5b69e141163a2ab374d293be92df564bc276398ba2958. 
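The lines above trace one full CRI container lifecycle: PullImage, then CreateContainer inside the sandbox returned earlier by RunPodSandbox, then StartContainer, at which point containerd connects to the shim. A hedged sketch of the same sequence against the public CRI client API; the sandbox id and container config are illustrative and incomplete (a real CreateContainer also carries the sandbox config), and kubelet, not a standalone client, normally drives this socket:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// 1. PullImage — the 'PullImage "ghcr.io/flatcar/calico/typha:v3.30.4"' line.
	_, _ = img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"},
	})
	// 2. CreateContainer within the already-running sandbox (id is illustrative).
	created, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: "example-sandbox-id",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"},
		},
	})
	// 3. StartContainer — containerd then logs "connecting to shim <id>".
	_, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
}
```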
Nov 23 23:05:37.237664 containerd[1523]: time="2025-11-23T23:05:37.237568348Z" level=info msg="StartContainer for \"8801adc269278cc7a4e5b69e141163a2ab374d293be92df564bc276398ba2958\" returns successfully" Nov 23 23:05:37.258524 kubelet[2682]: E1123 23:05:37.258281 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:05:37.342772 kubelet[2682]: E1123 23:05:37.341582 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:37.357216 kubelet[2682]: I1123 23:05:37.357148 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55b47bd55b-g5w7n" podStartSLOduration=1.744574599 podStartE2EDuration="3.357129107s" podCreationTimestamp="2025-11-23 23:05:34 +0000 UTC" firstStartedPulling="2025-11-23 23:05:35.444827603 +0000 UTC m=+21.289533509" lastFinishedPulling="2025-11-23 23:05:37.057382031 +0000 UTC m=+22.902088017" observedRunningTime="2025-11-23 23:05:37.354073477 +0000 UTC m=+23.198779423" watchObservedRunningTime="2025-11-23 23:05:37.357129107 +0000 UTC m=+23.201835053" Nov 23 23:05:37.423437 kubelet[2682]: E1123 23:05:37.423401 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:37.423437 kubelet[2682]: W1123 23:05:37.423426 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:37.428845 kubelet[2682]: E1123 23:05:37.428789 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:37.429635 kubelet[2682]: E1123 23:05:37.429603 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:37.429635 kubelet[2682]: W1123 23:05:37.429626 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:37.429756 kubelet[2682]: E1123 23:05:37.429657 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:05:37.430201 kubelet[2682]: E1123 23:05:37.430030 2682 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:05:37.430201 kubelet[2682]: W1123 23:05:37.430047 2682 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:05:37.430201 kubelet[2682]: E1123 23:05:37.430058 2682 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:05:38.108010 containerd[1523]: time="2025-11-23T23:05:38.107944832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:38.108443 containerd[1523]: time="2025-11-23T23:05:38.108109513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 23:05:38.109006 containerd[1523]: time="2025-11-23T23:05:38.108609797Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:38.111540 containerd[1523]: time="2025-11-23T23:05:38.111485191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:38.112416 containerd[1523]: time="2025-11-23T23:05:38.112033367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.054039018s" Nov 23 23:05:38.112416 containerd[1523]: time="2025-11-23T23:05:38.112063134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 23:05:38.115716 containerd[1523]: time="2025-11-23T23:05:38.115685033Z" level=info msg="CreateContainer within sandbox \"2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 23:05:38.125732 containerd[1523]: time="2025-11-23T23:05:38.125672632Z" level=info msg="Container 019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:38.138385 containerd[1523]: time="2025-11-23T23:05:38.138323892Z" level=info msg="CreateContainer within sandbox \"2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58\"" Nov 23 23:05:38.140371 containerd[1523]: time="2025-11-23T23:05:38.140314026Z" level=info msg="StartContainer for \"019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58\"" Nov 23 23:05:38.155232 containerd[1523]: time="2025-11-23T23:05:38.155163831Z" level=info msg="connecting to shim 019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58" address="unix:///run/containerd/s/2de7c842ef3662d36cf1549ef598b95103409fed0096a4225e358157c04a0912" protocol=ttrpc version=3 Nov 23 23:05:38.196732 systemd[1]: Started cri-containerd-019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58.scope - libcontainer container 019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58. 
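
The flexvol-driver container created above is what eventually clears the nodeagent~uds probe failures: Calico's pod2daemon-flexvol image exists to copy the uds FlexVolume binary into the kubelet plugin directory that the 23:05:35 and 23:05:37 probes were failing on (the shim address on the unix socket is containerd's usual per-task ttrpc endpoint and is unremarkable). A quick way to observe the effect, as an illustrative sketch with the path taken from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Same path the kubelet probe was failing on before flexvol-driver ran.
	const uds = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(uds)
	if err != nil {
		fmt.Println("driver not installed yet:", err)
		return
	}
	fmt.Printf("driver present: mode=%v size=%d bytes\n", info.Mode(), info.Size())
}
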
Nov 23 23:05:38.274077 containerd[1523]: time="2025-11-23T23:05:38.274024611Z" level=info msg="StartContainer for \"019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58\" returns successfully" Nov 23 23:05:38.293610 systemd[1]: cri-containerd-019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58.scope: Deactivated successfully. Nov 23 23:05:38.343176 containerd[1523]: time="2025-11-23T23:05:38.343120120Z" level=info msg="received container exit event container_id:\"019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58\" id:\"019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58\" pid:3383 exited_at:{seconds:1763939138 nanos:331830478}" Nov 23 23:05:38.345909 kubelet[2682]: I1123 23:05:38.345797 2682 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:05:38.346247 kubelet[2682]: E1123 23:05:38.346106 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:38.346893 kubelet[2682]: E1123 23:05:38.346672 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:38.395039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-019aca5746e9b4a66a25ad2c576438abdafa4fff88d6c4c2fd62fa10268d1c58-rootfs.mount: Deactivated successfully. Nov 23 23:05:39.257990 kubelet[2682]: E1123 23:05:39.257795 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:05:39.349486 kubelet[2682]: E1123 23:05:39.349455 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:39.351291 containerd[1523]: time="2025-11-23T23:05:39.351256864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:05:41.258028 kubelet[2682]: E1123 23:05:41.257752 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:05:42.320947 containerd[1523]: time="2025-11-23T23:05:42.320545490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:42.321300 containerd[1523]: time="2025-11-23T23:05:42.321222794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 23:05:42.322405 containerd[1523]: time="2025-11-23T23:05:42.322343552Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:42.324778 containerd[1523]: time="2025-11-23T23:05:42.324731780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:42.325356 containerd[1523]: time="2025-11-23T23:05:42.325324626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.974025953s" Nov 23 23:05:42.325388 containerd[1523]: time="2025-11-23T23:05:42.325362194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 23:05:42.328346 containerd[1523]: time="2025-11-23T23:05:42.328284696Z" level=info msg="CreateContainer within sandbox \"2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:05:42.348859 containerd[1523]: time="2025-11-23T23:05:42.348796378Z" level=info msg="Container 5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:42.358211 containerd[1523]: time="2025-11-23T23:05:42.358134483Z" level=info msg="CreateContainer within sandbox \"2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de\"" Nov 23 23:05:42.360047 containerd[1523]: time="2025-11-23T23:05:42.360019044Z" level=info msg="StartContainer for \"5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de\"" Nov 23 23:05:42.361918 containerd[1523]: time="2025-11-23T23:05:42.361890042Z" level=info msg="connecting to shim 5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de" address="unix:///run/containerd/s/2de7c842ef3662d36cf1549ef598b95103409fed0096a4225e358157c04a0912" protocol=ttrpc version=3 Nov 23 23:05:42.390757 systemd[1]: Started cri-containerd-5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de.scope - libcontainer container 5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de. Nov 23 23:05:42.470598 containerd[1523]: time="2025-11-23T23:05:42.470557509Z" level=info msg="StartContainer for \"5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de\" returns successfully" Nov 23 23:05:43.238414 systemd[1]: cri-containerd-5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de.scope: Deactivated successfully. Nov 23 23:05:43.239568 systemd[1]: cri-containerd-5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de.scope: Consumed 477ms CPU time, 176.9M memory peak, 3.1M read from disk, 165.9M written to disk. 
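
install-cni exits by design: it is an init container that copies the Calico CNI plugin binaries and configuration into place (the 165.9M written to disk above is consistent with that copy) and then terminates, so the scope deactivation here is normal, not a crash. Pod networking still cannot come up at this point, because Calico's CNI plugin reads the node name from a file that only a running calico/node container writes; until that file exists, every sandbox create or delete fails with the stat error seen below. A sketch of that gate (illustrative; the path is taken from the errors that follow):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// calico/node writes this file once it is up; the CNI plugin requires it.
	const nodenameFile = "/var/lib/calico/nodename"

	if _, err := os.Stat(nodenameFile); err != nil {
		// Matches the sandbox failures below:
		// "stat /var/lib/calico/nodename: no such file or directory"
		fmt.Println("CNI not ready:", err)
		return
	}
	b, _ := os.ReadFile(nodenameFile)
	fmt.Println("nodename:", strings.TrimSpace(string(b)))
}
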
Nov 23 23:05:43.239849 containerd[1523]: time="2025-11-23T23:05:43.239804200Z" level=info msg="received container exit event container_id:\"5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de\" id:\"5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de\" pid:3443 exited_at:{seconds:1763939143 nanos:239531344}" Nov 23 23:05:43.258482 kubelet[2682]: E1123 23:05:43.258440 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:05:43.262102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d8f354faef58e5dd10510e44b76ff11a325a339b938da0d2001bb19cf7705de-rootfs.mount: Deactivated successfully. Nov 23 23:05:43.309619 kubelet[2682]: I1123 23:05:43.309558 2682 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 23:05:43.368320 kubelet[2682]: E1123 23:05:43.368251 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:43.372529 containerd[1523]: time="2025-11-23T23:05:43.371421435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:05:43.383242 kubelet[2682]: I1123 23:05:43.383194 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/583c9149-22f9-45c3-9bb3-3e5f60548c49-goldmane-key-pair\") pod \"goldmane-666569f655-mbtpf\" (UID: \"583c9149-22f9-45c3-9bb3-3e5f60548c49\") " pod="calico-system/goldmane-666569f655-mbtpf" Nov 23 23:05:43.383242 kubelet[2682]: I1123 23:05:43.383240 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6klw\" (UniqueName: \"kubernetes.io/projected/6ab77788-9835-4030-a5a9-a00cf34b7381-kube-api-access-h6klw\") pod \"coredns-668d6bf9bc-nv6cn\" (UID: \"6ab77788-9835-4030-a5a9-a00cf34b7381\") " pod="kube-system/coredns-668d6bf9bc-nv6cn" Nov 23 23:05:43.383402 kubelet[2682]: I1123 23:05:43.383259 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e65264-2f6f-4444-a160-2c94126a2e6d-config-volume\") pod \"coredns-668d6bf9bc-2mghm\" (UID: \"b8e65264-2f6f-4444-a160-2c94126a2e6d\") " pod="kube-system/coredns-668d6bf9bc-2mghm" Nov 23 23:05:43.383402 kubelet[2682]: I1123 23:05:43.383294 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnxw\" (UniqueName: \"kubernetes.io/projected/9258d3b7-8de8-4b94-bd45-9195727d4ddb-kube-api-access-rdnxw\") pod \"calico-apiserver-57f95fbbd5-bxqch\" (UID: \"9258d3b7-8de8-4b94-bd45-9195727d4ddb\") " pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" Nov 23 23:05:43.383402 kubelet[2682]: I1123 23:05:43.383311 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/583c9149-22f9-45c3-9bb3-3e5f60548c49-goldmane-ca-bundle\") pod \"goldmane-666569f655-mbtpf\" (UID: \"583c9149-22f9-45c3-9bb3-3e5f60548c49\") " pod="calico-system/goldmane-666569f655-mbtpf" Nov 23 23:05:43.383402 kubelet[2682]: I1123 
23:05:43.383330 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9nr8\" (UniqueName: \"kubernetes.io/projected/583c9149-22f9-45c3-9bb3-3e5f60548c49-kube-api-access-l9nr8\") pod \"goldmane-666569f655-mbtpf\" (UID: \"583c9149-22f9-45c3-9bb3-3e5f60548c49\") " pod="calico-system/goldmane-666569f655-mbtpf" Nov 23 23:05:43.383402 kubelet[2682]: I1123 23:05:43.383346 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5gts\" (UniqueName: \"kubernetes.io/projected/34a9e722-c2ac-40a8-8496-e24ef8260bba-kube-api-access-q5gts\") pod \"calico-kube-controllers-6f9f955d8c-k5vtv\" (UID: \"34a9e722-c2ac-40a8-8496-e24ef8260bba\") " pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" Nov 23 23:05:43.383539 kubelet[2682]: I1123 23:05:43.383362 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ab77788-9835-4030-a5a9-a00cf34b7381-config-volume\") pod \"coredns-668d6bf9bc-nv6cn\" (UID: \"6ab77788-9835-4030-a5a9-a00cf34b7381\") " pod="kube-system/coredns-668d6bf9bc-nv6cn" Nov 23 23:05:43.383539 kubelet[2682]: I1123 23:05:43.383380 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4987122a-6b7a-47b0-a501-e31f8cdc6bd8-calico-apiserver-certs\") pod \"calico-apiserver-865b864c6b-zwbsd\" (UID: \"4987122a-6b7a-47b0-a501-e31f8cdc6bd8\") " pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" Nov 23 23:05:43.383539 kubelet[2682]: I1123 23:05:43.383407 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9258d3b7-8de8-4b94-bd45-9195727d4ddb-calico-apiserver-certs\") pod \"calico-apiserver-57f95fbbd5-bxqch\" (UID: \"9258d3b7-8de8-4b94-bd45-9195727d4ddb\") " pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" Nov 23 23:05:43.383539 kubelet[2682]: I1123 23:05:43.383422 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a9e722-c2ac-40a8-8496-e24ef8260bba-tigera-ca-bundle\") pod \"calico-kube-controllers-6f9f955d8c-k5vtv\" (UID: \"34a9e722-c2ac-40a8-8496-e24ef8260bba\") " pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" Nov 23 23:05:43.383539 kubelet[2682]: I1123 23:05:43.383442 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54kxm\" (UniqueName: \"kubernetes.io/projected/4987122a-6b7a-47b0-a501-e31f8cdc6bd8-kube-api-access-54kxm\") pod \"calico-apiserver-865b864c6b-zwbsd\" (UID: \"4987122a-6b7a-47b0-a501-e31f8cdc6bd8\") " pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" Nov 23 23:05:43.385520 kubelet[2682]: I1123 23:05:43.385336 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583c9149-22f9-45c3-9bb3-3e5f60548c49-config\") pod \"goldmane-666569f655-mbtpf\" (UID: \"583c9149-22f9-45c3-9bb3-3e5f60548c49\") " pod="calico-system/goldmane-666569f655-mbtpf" Nov 23 23:05:43.388091 kubelet[2682]: I1123 23:05:43.385661 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gntdb\" (UniqueName: 
\"kubernetes.io/projected/b8e65264-2f6f-4444-a160-2c94126a2e6d-kube-api-access-gntdb\") pod \"coredns-668d6bf9bc-2mghm\" (UID: \"b8e65264-2f6f-4444-a160-2c94126a2e6d\") " pod="kube-system/coredns-668d6bf9bc-2mghm" Nov 23 23:05:43.388440 systemd[1]: Created slice kubepods-burstable-podb8e65264_2f6f_4444_a160_2c94126a2e6d.slice - libcontainer container kubepods-burstable-podb8e65264_2f6f_4444_a160_2c94126a2e6d.slice. Nov 23 23:05:43.397078 systemd[1]: Created slice kubepods-besteffort-pod34a9e722_c2ac_40a8_8496_e24ef8260bba.slice - libcontainer container kubepods-besteffort-pod34a9e722_c2ac_40a8_8496_e24ef8260bba.slice. Nov 23 23:05:43.405684 systemd[1]: Created slice kubepods-burstable-pod6ab77788_9835_4030_a5a9_a00cf34b7381.slice - libcontainer container kubepods-burstable-pod6ab77788_9835_4030_a5a9_a00cf34b7381.slice. Nov 23 23:05:43.437456 systemd[1]: Created slice kubepods-besteffort-pod9258d3b7_8de8_4b94_bd45_9195727d4ddb.slice - libcontainer container kubepods-besteffort-pod9258d3b7_8de8_4b94_bd45_9195727d4ddb.slice. Nov 23 23:05:43.446026 systemd[1]: Created slice kubepods-besteffort-pod4987122a_6b7a_47b0_a501_e31f8cdc6bd8.slice - libcontainer container kubepods-besteffort-pod4987122a_6b7a_47b0_a501_e31f8cdc6bd8.slice. Nov 23 23:05:43.451421 systemd[1]: Created slice kubepods-besteffort-pod583c9149_22f9_45c3_9bb3_3e5f60548c49.slice - libcontainer container kubepods-besteffort-pod583c9149_22f9_45c3_9bb3_3e5f60548c49.slice. Nov 23 23:05:43.458329 systemd[1]: Created slice kubepods-besteffort-podbab07697_3b96_415b_b1d7_632329a49d75.slice - libcontainer container kubepods-besteffort-podbab07697_3b96_415b_b1d7_632329a49d75.slice. Nov 23 23:05:43.466817 systemd[1]: Created slice kubepods-besteffort-podd359e43e_e2d0_4171_8ab1_f199934fc8c6.slice - libcontainer container kubepods-besteffort-podd359e43e_e2d0_4171_8ab1_f199934fc8c6.slice. 
Nov 23 23:05:43.488558 kubelet[2682]: I1123 23:05:43.486823 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxd4h\" (UniqueName: \"kubernetes.io/projected/d359e43e-e2d0-4171-8ab1-f199934fc8c6-kube-api-access-xxd4h\") pod \"whisker-5bb8654ddd-m7j2j\" (UID: \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\") " pod="calico-system/whisker-5bb8654ddd-m7j2j" Nov 23 23:05:43.488792 kubelet[2682]: I1123 23:05:43.488769 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d359e43e-e2d0-4171-8ab1-f199934fc8c6-whisker-backend-key-pair\") pod \"whisker-5bb8654ddd-m7j2j\" (UID: \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\") " pod="calico-system/whisker-5bb8654ddd-m7j2j" Nov 23 23:05:43.488914 kubelet[2682]: I1123 23:05:43.488900 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d359e43e-e2d0-4171-8ab1-f199934fc8c6-whisker-ca-bundle\") pod \"whisker-5bb8654ddd-m7j2j\" (UID: \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\") " pod="calico-system/whisker-5bb8654ddd-m7j2j" Nov 23 23:05:43.489041 kubelet[2682]: I1123 23:05:43.489026 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkj5h\" (UniqueName: \"kubernetes.io/projected/bab07697-3b96-415b-b1d7-632329a49d75-kube-api-access-vkj5h\") pod \"calico-apiserver-865b864c6b-p6bp2\" (UID: \"bab07697-3b96-415b-b1d7-632329a49d75\") " pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" Nov 23 23:05:43.489136 kubelet[2682]: I1123 23:05:43.489120 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bab07697-3b96-415b-b1d7-632329a49d75-calico-apiserver-certs\") pod \"calico-apiserver-865b864c6b-p6bp2\" (UID: \"bab07697-3b96-415b-b1d7-632329a49d75\") " pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" Nov 23 23:05:43.700314 kubelet[2682]: E1123 23:05:43.700262 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:43.701148 containerd[1523]: time="2025-11-23T23:05:43.701099452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mghm,Uid:b8e65264-2f6f-4444-a160-2c94126a2e6d,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:43.703230 containerd[1523]: time="2025-11-23T23:05:43.703185920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9f955d8c-k5vtv,Uid:34a9e722-c2ac-40a8-8496-e24ef8260bba,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:43.712779 kubelet[2682]: E1123 23:05:43.712741 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:43.717552 containerd[1523]: time="2025-11-23T23:05:43.716580107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nv6cn,Uid:6ab77788-9835-4030-a5a9-a00cf34b7381,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:43.742485 containerd[1523]: time="2025-11-23T23:05:43.742348992Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57f95fbbd5-bxqch,Uid:9258d3b7-8de8-4b94-bd45-9195727d4ddb,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:05:43.749803 containerd[1523]: time="2025-11-23T23:05:43.749768714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b864c6b-zwbsd,Uid:4987122a-6b7a-47b0-a501-e31f8cdc6bd8,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:05:43.755649 containerd[1523]: time="2025-11-23T23:05:43.755612073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mbtpf,Uid:583c9149-22f9-45c3-9bb3-3e5f60548c49,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:43.766400 containerd[1523]: time="2025-11-23T23:05:43.766117027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b864c6b-p6bp2,Uid:bab07697-3b96-415b-b1d7-632329a49d75,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:05:43.770886 containerd[1523]: time="2025-11-23T23:05:43.770847437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bb8654ddd-m7j2j,Uid:d359e43e-e2d0-4171-8ab1-f199934fc8c6,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:43.833697 containerd[1523]: time="2025-11-23T23:05:43.833641237Z" level=error msg="Failed to destroy network for sandbox \"f0768804588ac0325b239bd9a9e6b45566a94682e1b0502438fb69c375bb76f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.835100 containerd[1523]: time="2025-11-23T23:05:43.835037163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nv6cn,Uid:6ab77788-9835-4030-a5a9-a00cf34b7381,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0768804588ac0325b239bd9a9e6b45566a94682e1b0502438fb69c375bb76f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.836102 kubelet[2682]: E1123 23:05:43.835963 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0768804588ac0325b239bd9a9e6b45566a94682e1b0502438fb69c375bb76f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.836102 kubelet[2682]: E1123 23:05:43.836064 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0768804588ac0325b239bd9a9e6b45566a94682e1b0502438fb69c375bb76f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nv6cn" Nov 23 23:05:43.836322 kubelet[2682]: E1123 23:05:43.836236 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0768804588ac0325b239bd9a9e6b45566a94682e1b0502438fb69c375bb76f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nv6cn" Nov 23 23:05:43.836422 
kubelet[2682]: E1123 23:05:43.836394 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nv6cn_kube-system(6ab77788-9835-4030-a5a9-a00cf34b7381)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nv6cn_kube-system(6ab77788-9835-4030-a5a9-a00cf34b7381)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0768804588ac0325b239bd9a9e6b45566a94682e1b0502438fb69c375bb76f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nv6cn" podUID="6ab77788-9835-4030-a5a9-a00cf34b7381" Nov 23 23:05:43.847909 containerd[1523]: time="2025-11-23T23:05:43.847843229Z" level=error msg="Failed to destroy network for sandbox \"a495a4e6181f1b1b0b9be75bfa4fdbe94b143c737dfda1054b62a633a345e6b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.848614 containerd[1523]: time="2025-11-23T23:05:43.848173177Z" level=error msg="Failed to destroy network for sandbox \"37ea5c29f9099d15b9a4a9ef4dc6a768f01a2c4efd3ae5f9d1d9b57e22e51ec8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.849685 containerd[1523]: time="2025-11-23T23:05:43.849637077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9f955d8c-k5vtv,Uid:34a9e722-c2ac-40a8-8496-e24ef8260bba,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a495a4e6181f1b1b0b9be75bfa4fdbe94b143c737dfda1054b62a633a345e6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.849946 kubelet[2682]: E1123 23:05:43.849905 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a495a4e6181f1b1b0b9be75bfa4fdbe94b143c737dfda1054b62a633a345e6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.850001 kubelet[2682]: E1123 23:05:43.849968 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a495a4e6181f1b1b0b9be75bfa4fdbe94b143c737dfda1054b62a633a345e6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" Nov 23 23:05:43.850001 kubelet[2682]: E1123 23:05:43.849987 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a495a4e6181f1b1b0b9be75bfa4fdbe94b143c737dfda1054b62a633a345e6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" Nov 23 23:05:43.850061 kubelet[2682]: E1123 23:05:43.850023 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f9f955d8c-k5vtv_calico-system(34a9e722-c2ac-40a8-8496-e24ef8260bba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f9f955d8c-k5vtv_calico-system(34a9e722-c2ac-40a8-8496-e24ef8260bba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a495a4e6181f1b1b0b9be75bfa4fdbe94b143c737dfda1054b62a633a345e6b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" podUID="34a9e722-c2ac-40a8-8496-e24ef8260bba" Nov 23 23:05:43.850935 containerd[1523]: time="2025-11-23T23:05:43.850780192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mghm,Uid:b8e65264-2f6f-4444-a160-2c94126a2e6d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ea5c29f9099d15b9a4a9ef4dc6a768f01a2c4efd3ae5f9d1d9b57e22e51ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.851366 kubelet[2682]: E1123 23:05:43.851305 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ea5c29f9099d15b9a4a9ef4dc6a768f01a2c4efd3ae5f9d1d9b57e22e51ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.851433 kubelet[2682]: E1123 23:05:43.851373 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ea5c29f9099d15b9a4a9ef4dc6a768f01a2c4efd3ae5f9d1d9b57e22e51ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2mghm" Nov 23 23:05:43.851433 kubelet[2682]: E1123 23:05:43.851394 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37ea5c29f9099d15b9a4a9ef4dc6a768f01a2c4efd3ae5f9d1d9b57e22e51ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2mghm" Nov 23 23:05:43.851561 kubelet[2682]: E1123 23:05:43.851430 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2mghm_kube-system(b8e65264-2f6f-4444-a160-2c94126a2e6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2mghm_kube-system(b8e65264-2f6f-4444-a160-2c94126a2e6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37ea5c29f9099d15b9a4a9ef4dc6a768f01a2c4efd3ae5f9d1d9b57e22e51ec8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2mghm" podUID="b8e65264-2f6f-4444-a160-2c94126a2e6d" Nov 23 23:05:43.866382 containerd[1523]: time="2025-11-23T23:05:43.866330341Z" level=error msg="Failed to destroy network for sandbox \"83f74cc8242fbc710e64bba0acba2843259f042c8b7e9cacc2bad893932bd72e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.867773 containerd[1523]: time="2025-11-23T23:05:43.867709544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b864c6b-zwbsd,Uid:4987122a-6b7a-47b0-a501-e31f8cdc6bd8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f74cc8242fbc710e64bba0acba2843259f042c8b7e9cacc2bad893932bd72e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.868146 kubelet[2682]: E1123 23:05:43.868106 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f74cc8242fbc710e64bba0acba2843259f042c8b7e9cacc2bad893932bd72e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.868380 kubelet[2682]: E1123 23:05:43.868255 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f74cc8242fbc710e64bba0acba2843259f042c8b7e9cacc2bad893932bd72e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" Nov 23 23:05:43.868380 kubelet[2682]: E1123 23:05:43.868285 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f74cc8242fbc710e64bba0acba2843259f042c8b7e9cacc2bad893932bd72e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" Nov 23 23:05:43.868380 kubelet[2682]: E1123 23:05:43.868346 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-865b864c6b-zwbsd_calico-apiserver(4987122a-6b7a-47b0-a501-e31f8cdc6bd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-865b864c6b-zwbsd_calico-apiserver(4987122a-6b7a-47b0-a501-e31f8cdc6bd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83f74cc8242fbc710e64bba0acba2843259f042c8b7e9cacc2bad893932bd72e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" podUID="4987122a-6b7a-47b0-a501-e31f8cdc6bd8" Nov 23 23:05:43.868903 containerd[1523]: time="2025-11-23T23:05:43.868858100Z" level=error msg="Failed to destroy network 
for sandbox \"8e4fba50fca1c3f6f775cbe9a294b6fbc19788789701a6c1a7fafa4b8a311aa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.871755 containerd[1523]: time="2025-11-23T23:05:43.871692321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f95fbbd5-bxqch,Uid:9258d3b7-8de8-4b94-bd45-9195727d4ddb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e4fba50fca1c3f6f775cbe9a294b6fbc19788789701a6c1a7fafa4b8a311aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.872929 kubelet[2682]: E1123 23:05:43.872890 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e4fba50fca1c3f6f775cbe9a294b6fbc19788789701a6c1a7fafa4b8a311aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.873034 kubelet[2682]: E1123 23:05:43.872949 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e4fba50fca1c3f6f775cbe9a294b6fbc19788789701a6c1a7fafa4b8a311aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" Nov 23 23:05:43.873034 kubelet[2682]: E1123 23:05:43.872968 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e4fba50fca1c3f6f775cbe9a294b6fbc19788789701a6c1a7fafa4b8a311aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" Nov 23 23:05:43.873034 kubelet[2682]: E1123 23:05:43.873014 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57f95fbbd5-bxqch_calico-apiserver(9258d3b7-8de8-4b94-bd45-9195727d4ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57f95fbbd5-bxqch_calico-apiserver(9258d3b7-8de8-4b94-bd45-9195727d4ddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e4fba50fca1c3f6f775cbe9a294b6fbc19788789701a6c1a7fafa4b8a311aa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" podUID="9258d3b7-8de8-4b94-bd45-9195727d4ddb" Nov 23 23:05:43.878902 containerd[1523]: time="2025-11-23T23:05:43.878857270Z" level=error msg="Failed to destroy network for sandbox \"f554247aabaadf1f5676da16a69623692a488518437047a11d1e76c55143ce81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 
23:05:43.880252 containerd[1523]: time="2025-11-23T23:05:43.880214349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mbtpf,Uid:583c9149-22f9-45c3-9bb3-3e5f60548c49,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f554247aabaadf1f5676da16a69623692a488518437047a11d1e76c55143ce81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.880477 kubelet[2682]: E1123 23:05:43.880439 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f554247aabaadf1f5676da16a69623692a488518437047a11d1e76c55143ce81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.881219 kubelet[2682]: E1123 23:05:43.880516 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f554247aabaadf1f5676da16a69623692a488518437047a11d1e76c55143ce81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mbtpf" Nov 23 23:05:43.881259 kubelet[2682]: E1123 23:05:43.881231 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f554247aabaadf1f5676da16a69623692a488518437047a11d1e76c55143ce81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mbtpf" Nov 23 23:05:43.881319 kubelet[2682]: E1123 23:05:43.881289 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mbtpf_calico-system(583c9149-22f9-45c3-9bb3-3e5f60548c49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mbtpf_calico-system(583c9149-22f9-45c3-9bb3-3e5f60548c49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f554247aabaadf1f5676da16a69623692a488518437047a11d1e76c55143ce81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mbtpf" podUID="583c9149-22f9-45c3-9bb3-3e5f60548c49" Nov 23 23:05:43.885909 containerd[1523]: time="2025-11-23T23:05:43.885863467Z" level=error msg="Failed to destroy network for sandbox \"e1e5da434f01ea5d3602a8aa5904c738bf766fe86e7df96b7063854c99278909\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.887234 containerd[1523]: time="2025-11-23T23:05:43.887191140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bb8654ddd-m7j2j,Uid:d359e43e-e2d0-4171-8ab1-f199934fc8c6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e1e5da434f01ea5d3602a8aa5904c738bf766fe86e7df96b7063854c99278909\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.887487 kubelet[2682]: E1123 23:05:43.887426 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e5da434f01ea5d3602a8aa5904c738bf766fe86e7df96b7063854c99278909\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.887569 kubelet[2682]: E1123 23:05:43.887530 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e5da434f01ea5d3602a8aa5904c738bf766fe86e7df96b7063854c99278909\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bb8654ddd-m7j2j" Nov 23 23:05:43.887569 kubelet[2682]: E1123 23:05:43.887547 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e5da434f01ea5d3602a8aa5904c738bf766fe86e7df96b7063854c99278909\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bb8654ddd-m7j2j" Nov 23 23:05:43.887569 kubelet[2682]: E1123 23:05:43.887591 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bb8654ddd-m7j2j_calico-system(d359e43e-e2d0-4171-8ab1-f199934fc8c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bb8654ddd-m7j2j_calico-system(d359e43e-e2d0-4171-8ab1-f199934fc8c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1e5da434f01ea5d3602a8aa5904c738bf766fe86e7df96b7063854c99278909\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bb8654ddd-m7j2j" podUID="d359e43e-e2d0-4171-8ab1-f199934fc8c6" Nov 23 23:05:43.890186 containerd[1523]: time="2025-11-23T23:05:43.890136264Z" level=error msg="Failed to destroy network for sandbox \"abe101201dbbd50866b83b203d8c950071ccafb7d5356c189d4e3be7918e8400\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.891835 containerd[1523]: time="2025-11-23T23:05:43.891769279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b864c6b-p6bp2,Uid:bab07697-3b96-415b-b1d7-632329a49d75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abe101201dbbd50866b83b203d8c950071ccafb7d5356c189d4e3be7918e8400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.892065 kubelet[2682]: E1123 23:05:43.892032 2682 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abe101201dbbd50866b83b203d8c950071ccafb7d5356c189d4e3be7918e8400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:43.892149 kubelet[2682]: E1123 23:05:43.892094 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abe101201dbbd50866b83b203d8c950071ccafb7d5356c189d4e3be7918e8400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" Nov 23 23:05:43.892149 kubelet[2682]: E1123 23:05:43.892127 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abe101201dbbd50866b83b203d8c950071ccafb7d5356c189d4e3be7918e8400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" Nov 23 23:05:43.892304 kubelet[2682]: E1123 23:05:43.892170 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-865b864c6b-p6bp2_calico-apiserver(bab07697-3b96-415b-b1d7-632329a49d75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-865b864c6b-p6bp2_calico-apiserver(bab07697-3b96-415b-b1d7-632329a49d75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abe101201dbbd50866b83b203d8c950071ccafb7d5356c189d4e3be7918e8400\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" podUID="bab07697-3b96-415b-b1d7-632329a49d75" Nov 23 23:05:45.265794 systemd[1]: Created slice kubepods-besteffort-pod06591145_f7c8_4eb9_86a0_ddb163a9822f.slice - libcontainer container kubepods-besteffort-pod06591145_f7c8_4eb9_86a0_ddb163a9822f.slice. Nov 23 23:05:45.275681 containerd[1523]: time="2025-11-23T23:05:45.275585587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdb4s,Uid:06591145-f7c8-4eb9-86a0-ddb163a9822f,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:45.358593 containerd[1523]: time="2025-11-23T23:05:45.358534823Z" level=error msg="Failed to destroy network for sandbox \"3ec17a154031e35e9cedd37f288205525447d2cf4d2b69bc596d71d367283d0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:45.361261 systemd[1]: run-netns-cni\x2d0c2d0bd0\x2d03ae\x2dd48a\x2dfcb5\x2d843f0b75f1ec.mount: Deactivated successfully. 
Nov 23 23:05:45.403064 containerd[1523]: time="2025-11-23T23:05:45.402945723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdb4s,Uid:06591145-f7c8-4eb9-86a0-ddb163a9822f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec17a154031e35e9cedd37f288205525447d2cf4d2b69bc596d71d367283d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:45.403269 kubelet[2682]: E1123 23:05:45.403170 2682 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec17a154031e35e9cedd37f288205525447d2cf4d2b69bc596d71d367283d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:05:45.403269 kubelet[2682]: E1123 23:05:45.403239 2682 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec17a154031e35e9cedd37f288205525447d2cf4d2b69bc596d71d367283d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pdb4s" Nov 23 23:05:45.403269 kubelet[2682]: E1123 23:05:45.403257 2682 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec17a154031e35e9cedd37f288205525447d2cf4d2b69bc596d71d367283d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pdb4s" Nov 23 23:05:45.404033 kubelet[2682]: E1123 23:05:45.403302 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pdb4s_calico-system(06591145-f7c8-4eb9-86a0-ddb163a9822f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pdb4s_calico-system(06591145-f7c8-4eb9-86a0-ddb163a9822f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ec17a154031e35e9cedd37f288205525447d2cf4d2b69bc596d71d367283d0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:05:46.609754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3763719054.mount: Deactivated successfully. 
Nov 23 23:05:46.814165 containerd[1523]: time="2025-11-23T23:05:46.814109169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:46.815176 containerd[1523]: time="2025-11-23T23:05:46.814718682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:05:46.815832 containerd[1523]: time="2025-11-23T23:05:46.815802483Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:46.817774 containerd[1523]: time="2025-11-23T23:05:46.817737321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:46.818726 containerd[1523]: time="2025-11-23T23:05:46.818585278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.447120634s" Nov 23 23:05:46.818726 containerd[1523]: time="2025-11-23T23:05:46.818626205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:05:46.836205 containerd[1523]: time="2025-11-23T23:05:46.835776301Z" level=info msg="CreateContainer within sandbox \"2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:05:46.852586 containerd[1523]: time="2025-11-23T23:05:46.852530844Z" level=info msg="Container 1c590c644c09731034e4d58aaa95c03515a51667c48d8b34e6e64794f7cfd792: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:46.869330 containerd[1523]: time="2025-11-23T23:05:46.869284347Z" level=info msg="CreateContainer within sandbox \"2f33d3a3334f7611ecbc5da704c2fc6513675e3b5e0c4d45c0a337e0ca0e66c6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c590c644c09731034e4d58aaa95c03515a51667c48d8b34e6e64794f7cfd792\"" Nov 23 23:05:46.870155 containerd[1523]: time="2025-11-23T23:05:46.869967233Z" level=info msg="StartContainer for \"1c590c644c09731034e4d58aaa95c03515a51667c48d8b34e6e64794f7cfd792\"" Nov 23 23:05:46.871717 containerd[1523]: time="2025-11-23T23:05:46.871685351Z" level=info msg="connecting to shim 1c590c644c09731034e4d58aaa95c03515a51667c48d8b34e6e64794f7cfd792" address="unix:///run/containerd/s/2de7c842ef3662d36cf1549ef598b95103409fed0096a4225e358157c04a0912" protocol=ttrpc version=3 Nov 23 23:05:46.891705 systemd[1]: Started cri-containerd-1c590c644c09731034e4d58aaa95c03515a51667c48d8b34e6e64794f7cfd792.scope - libcontainer container 1c590c644c09731034e4d58aaa95c03515a51667c48d8b34e6e64794f7cfd792. Nov 23 23:05:46.964726 containerd[1523]: time="2025-11-23T23:05:46.964673331Z" level=info msg="StartContainer for \"1c590c644c09731034e4d58aaa95c03515a51667c48d8b34e6e64794f7cfd792\" returns successfully" Nov 23 23:05:47.092327 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:05:47.092461 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Nov 23 23:05:47.309932 kubelet[2682]: I1123 23:05:47.309820 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d359e43e-e2d0-4171-8ab1-f199934fc8c6-whisker-backend-key-pair\") pod \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\" (UID: \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\") " Nov 23 23:05:47.311025 kubelet[2682]: I1123 23:05:47.309892 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d359e43e-e2d0-4171-8ab1-f199934fc8c6-whisker-ca-bundle\") pod \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\" (UID: \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\") " Nov 23 23:05:47.311025 kubelet[2682]: I1123 23:05:47.310786 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxd4h\" (UniqueName: \"kubernetes.io/projected/d359e43e-e2d0-4171-8ab1-f199934fc8c6-kube-api-access-xxd4h\") pod \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\" (UID: \"d359e43e-e2d0-4171-8ab1-f199934fc8c6\") " Nov 23 23:05:47.311468 kubelet[2682]: I1123 23:05:47.311424 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d359e43e-e2d0-4171-8ab1-f199934fc8c6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d359e43e-e2d0-4171-8ab1-f199934fc8c6" (UID: "d359e43e-e2d0-4171-8ab1-f199934fc8c6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:05:47.315132 kubelet[2682]: I1123 23:05:47.315090 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d359e43e-e2d0-4171-8ab1-f199934fc8c6-kube-api-access-xxd4h" (OuterVolumeSpecName: "kube-api-access-xxd4h") pod "d359e43e-e2d0-4171-8ab1-f199934fc8c6" (UID: "d359e43e-e2d0-4171-8ab1-f199934fc8c6"). InnerVolumeSpecName "kube-api-access-xxd4h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:05:47.315790 kubelet[2682]: I1123 23:05:47.315587 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d359e43e-e2d0-4171-8ab1-f199934fc8c6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d359e43e-e2d0-4171-8ab1-f199934fc8c6" (UID: "d359e43e-e2d0-4171-8ab1-f199934fc8c6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:05:47.396536 kubelet[2682]: E1123 23:05:47.394705 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:47.406245 systemd[1]: Removed slice kubepods-besteffort-podd359e43e_e2d0_4171_8ab1_f199934fc8c6.slice - libcontainer container kubepods-besteffort-podd359e43e_e2d0_4171_8ab1_f199934fc8c6.slice. 
Nov 23 23:05:47.418546 kubelet[2682]: I1123 23:05:47.418488 2682 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d359e43e-e2d0-4171-8ab1-f199934fc8c6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 23 23:05:47.418546 kubelet[2682]: I1123 23:05:47.418535 2682 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d359e43e-e2d0-4171-8ab1-f199934fc8c6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 23 23:05:47.418546 kubelet[2682]: I1123 23:05:47.418549 2682 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxd4h\" (UniqueName: \"kubernetes.io/projected/d359e43e-e2d0-4171-8ab1-f199934fc8c6-kube-api-access-xxd4h\") on node \"localhost\" DevicePath \"\"" Nov 23 23:05:47.440689 kubelet[2682]: I1123 23:05:47.440596 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ckqs8" podStartSLOduration=1.129293548 podStartE2EDuration="12.440548291s" podCreationTimestamp="2025-11-23 23:05:35 +0000 UTC" firstStartedPulling="2025-11-23 23:05:35.511621489 +0000 UTC m=+21.356327435" lastFinishedPulling="2025-11-23 23:05:46.822876232 +0000 UTC m=+32.667582178" observedRunningTime="2025-11-23 23:05:47.428926967 +0000 UTC m=+33.273632913" watchObservedRunningTime="2025-11-23 23:05:47.440548291 +0000 UTC m=+33.285254317" Nov 23 23:05:47.492831 systemd[1]: Created slice kubepods-besteffort-pod31a178cc_f6e2_4156_8d63_c50d6b225cdb.slice - libcontainer container kubepods-besteffort-pod31a178cc_f6e2_4156_8d63_c50d6b225cdb.slice. Nov 23 23:05:47.518879 kubelet[2682]: I1123 23:05:47.518829 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztm4j\" (UniqueName: \"kubernetes.io/projected/31a178cc-f6e2-4156-8d63-c50d6b225cdb-kube-api-access-ztm4j\") pod \"whisker-84d779b554-zljcz\" (UID: \"31a178cc-f6e2-4156-8d63-c50d6b225cdb\") " pod="calico-system/whisker-84d779b554-zljcz" Nov 23 23:05:47.518879 kubelet[2682]: I1123 23:05:47.518887 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31a178cc-f6e2-4156-8d63-c50d6b225cdb-whisker-backend-key-pair\") pod \"whisker-84d779b554-zljcz\" (UID: \"31a178cc-f6e2-4156-8d63-c50d6b225cdb\") " pod="calico-system/whisker-84d779b554-zljcz" Nov 23 23:05:47.519052 kubelet[2682]: I1123 23:05:47.518912 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a178cc-f6e2-4156-8d63-c50d6b225cdb-whisker-ca-bundle\") pod \"whisker-84d779b554-zljcz\" (UID: \"31a178cc-f6e2-4156-8d63-c50d6b225cdb\") " pod="calico-system/whisker-84d779b554-zljcz" Nov 23 23:05:47.609839 systemd[1]: var-lib-kubelet-pods-d359e43e\x2de2d0\x2d4171\x2d8ab1\x2df199934fc8c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxxd4h.mount: Deactivated successfully. Nov 23 23:05:47.609946 systemd[1]: var-lib-kubelet-pods-d359e43e\x2de2d0\x2d4171\x2d8ab1\x2df199934fc8c6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 23 23:05:47.797632 containerd[1523]: time="2025-11-23T23:05:47.797589929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d779b554-zljcz,Uid:31a178cc-f6e2-4156-8d63-c50d6b225cdb,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:47.970577 systemd-networkd[1440]: cali532ee3d283a: Link UP Nov 23 23:05:47.971146 systemd-networkd[1440]: cali532ee3d283a: Gained carrier Nov 23 23:05:47.992544 kubelet[2682]: I1123 23:05:47.992461 2682 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:05:47.993410 kubelet[2682]: E1123 23:05:47.993291 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:47.998654 containerd[1523]: 2025-11-23 23:05:47.820 [INFO][3881] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:05:47.998654 containerd[1523]: 2025-11-23 23:05:47.858 [INFO][3881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84d779b554--zljcz-eth0 whisker-84d779b554- calico-system 31a178cc-f6e2-4156-8d63-c50d6b225cdb 949 0 2025-11-23 23:05:47 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84d779b554 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84d779b554-zljcz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali532ee3d283a [] [] }} ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Namespace="calico-system" Pod="whisker-84d779b554-zljcz" WorkloadEndpoint="localhost-k8s-whisker--84d779b554--zljcz-" Nov 23 23:05:47.998654 containerd[1523]: 2025-11-23 23:05:47.858 [INFO][3881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Namespace="calico-system" Pod="whisker-84d779b554-zljcz" WorkloadEndpoint="localhost-k8s-whisker--84d779b554--zljcz-eth0" Nov 23 23:05:47.998654 containerd[1523]: 2025-11-23 23:05:47.922 [INFO][3896] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" HandleID="k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Workload="localhost-k8s-whisker--84d779b554--zljcz-eth0" Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.922 [INFO][3896] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" HandleID="k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Workload="localhost-k8s-whisker--84d779b554--zljcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137e50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84d779b554-zljcz", "timestamp":"2025-11-23 23:05:47.922079978 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.922 [INFO][3896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.922 [INFO][3896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.922 [INFO][3896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.933 [INFO][3896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" host="localhost" Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.939 [INFO][3896] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.944 [INFO][3896] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.946 [INFO][3896] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.948 [INFO][3896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:47.999103 containerd[1523]: 2025-11-23 23:05:47.949 [INFO][3896] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" host="localhost" Nov 23 23:05:47.999294 containerd[1523]: 2025-11-23 23:05:47.950 [INFO][3896] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670 Nov 23 23:05:47.999294 containerd[1523]: 2025-11-23 23:05:47.954 [INFO][3896] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" host="localhost" Nov 23 23:05:47.999294 containerd[1523]: 2025-11-23 23:05:47.961 [INFO][3896] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" host="localhost" Nov 23 23:05:47.999294 containerd[1523]: 2025-11-23 23:05:47.961 [INFO][3896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" host="localhost" Nov 23 23:05:47.999294 containerd[1523]: 2025-11-23 23:05:47.961 [INFO][3896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:05:47.999294 containerd[1523]: 2025-11-23 23:05:47.961 [INFO][3896] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" HandleID="k8s-pod-network.d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Workload="localhost-k8s-whisker--84d779b554--zljcz-eth0" Nov 23 23:05:47.999414 containerd[1523]: 2025-11-23 23:05:47.964 [INFO][3881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Namespace="calico-system" Pod="whisker-84d779b554-zljcz" WorkloadEndpoint="localhost-k8s-whisker--84d779b554--zljcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84d779b554--zljcz-eth0", GenerateName:"whisker-84d779b554-", Namespace:"calico-system", SelfLink:"", UID:"31a178cc-f6e2-4156-8d63-c50d6b225cdb", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84d779b554", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84d779b554-zljcz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali532ee3d283a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:47.999414 containerd[1523]: 2025-11-23 23:05:47.964 [INFO][3881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Namespace="calico-system" Pod="whisker-84d779b554-zljcz" WorkloadEndpoint="localhost-k8s-whisker--84d779b554--zljcz-eth0" Nov 23 23:05:47.999477 containerd[1523]: 2025-11-23 23:05:47.964 [INFO][3881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali532ee3d283a ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Namespace="calico-system" Pod="whisker-84d779b554-zljcz" WorkloadEndpoint="localhost-k8s-whisker--84d779b554--zljcz-eth0" Nov 23 23:05:47.999477 containerd[1523]: 2025-11-23 23:05:47.971 [INFO][3881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Namespace="calico-system" Pod="whisker-84d779b554-zljcz" WorkloadEndpoint="localhost-k8s-whisker--84d779b554--zljcz-eth0" Nov 23 23:05:47.999531 containerd[1523]: 2025-11-23 23:05:47.971 [INFO][3881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Namespace="calico-system" Pod="whisker-84d779b554-zljcz" WorkloadEndpoint="localhost-k8s-whisker--84d779b554--zljcz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84d779b554--zljcz-eth0", GenerateName:"whisker-84d779b554-", Namespace:"calico-system", SelfLink:"", UID:"31a178cc-f6e2-4156-8d63-c50d6b225cdb", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84d779b554", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670", Pod:"whisker-84d779b554-zljcz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali532ee3d283a", MAC:"06:6a:6a:ac:10:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:47.999584 containerd[1523]: 2025-11-23 23:05:47.990 [INFO][3881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" Namespace="calico-system" Pod="whisker-84d779b554-zljcz" WorkloadEndpoint="localhost-k8s-whisker--84d779b554--zljcz-eth0" Nov 23 23:05:48.077318 containerd[1523]: time="2025-11-23T23:05:48.077272073Z" level=info msg="connecting to shim d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670" address="unix:///run/containerd/s/23083b1bb2f8ba408cb0fc3aa1b84953122de77516fe4f1d7084bd8f999d4e03" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:48.099711 systemd[1]: Started cri-containerd-d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670.scope - libcontainer container d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670. 
Nov 23 23:05:48.110559 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:05:48.131993 containerd[1523]: time="2025-11-23T23:05:48.131949061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d779b554-zljcz,Uid:31a178cc-f6e2-4156-8d63-c50d6b225cdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8bf2408c0844ffb7a40393dd2e42047c43684e85ea63e384062c1fc6f725670\"" Nov 23 23:05:48.133663 containerd[1523]: time="2025-11-23T23:05:48.133627233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:05:48.260284 kubelet[2682]: I1123 23:05:48.260164 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d359e43e-e2d0-4171-8ab1-f199934fc8c6" path="/var/lib/kubelet/pods/d359e43e-e2d0-4171-8ab1-f199934fc8c6/volumes" Nov 23 23:05:48.314304 containerd[1523]: time="2025-11-23T23:05:48.314251962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:48.315275 containerd[1523]: time="2025-11-23T23:05:48.315236654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:05:48.315349 containerd[1523]: time="2025-11-23T23:05:48.315315107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:05:48.315469 kubelet[2682]: E1123 23:05:48.315435 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:05:48.315752 kubelet[2682]: E1123 23:05:48.315486 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:05:48.317216 kubelet[2682]: E1123 23:05:48.317153 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8a9e35afce540048647bad466d551b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztm4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d779b554-zljcz_calico-system(31a178cc-f6e2-4156-8d63-c50d6b225cdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:48.320585 containerd[1523]: time="2025-11-23T23:05:48.320537455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:05:48.402143 kubelet[2682]: E1123 23:05:48.402116 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:48.402571 kubelet[2682]: E1123 23:05:48.402543 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:48.523199 containerd[1523]: time="2025-11-23T23:05:48.521650108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:48.578400 containerd[1523]: time="2025-11-23T23:05:48.578336365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:05:48.578703 containerd[1523]: time="2025-11-23T23:05:48.578401256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:05:48.579735 kubelet[2682]: E1123 23:05:48.579661 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:05:48.579866 kubelet[2682]: E1123 23:05:48.579737 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:05:48.579982 kubelet[2682]: E1123 23:05:48.579897 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztm4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d779b554-zljcz_calico-system(31a178cc-f6e2-4156-8d63-c50d6b225cdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:48.581112 kubelet[2682]: E1123 23:05:48.581073 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d779b554-zljcz" podUID="31a178cc-f6e2-4156-8d63-c50d6b225cdb" Nov 23 23:05:48.893349 systemd-networkd[1440]: vxlan.calico: Link UP Nov 23 23:05:48.893357 systemd-networkd[1440]: vxlan.calico: Gained carrier Nov 23 23:05:49.411330 kubelet[2682]: E1123 23:05:49.411289 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:49.413073 kubelet[2682]: E1123 23:05:49.412961 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d779b554-zljcz" podUID="31a178cc-f6e2-4156-8d63-c50d6b225cdb" Nov 23 23:05:49.461623 systemd-networkd[1440]: cali532ee3d283a: Gained IPv6LL Nov 23 23:05:50.549680 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL Nov 23 23:05:54.565131 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:56016.service - OpenSSH per-connection server daemon (10.0.0.1:56016). Nov 23 23:05:54.646237 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 56016 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:05:54.648024 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:54.658008 systemd-logind[1494]: New session 8 of user core. Nov 23 23:05:54.668771 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 23:05:54.806919 sshd[4231]: Connection closed by 10.0.0.1 port 56016 Nov 23 23:05:54.807539 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Nov 23 23:05:54.811442 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:56016.service: Deactivated successfully. Nov 23 23:05:54.815272 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 23:05:54.817828 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. Nov 23 23:05:54.819379 systemd-logind[1494]: Removed session 8. 
Nov 23 23:05:57.258414 kubelet[2682]: E1123 23:05:57.258212 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:57.258795 containerd[1523]: time="2025-11-23T23:05:57.258654827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mghm,Uid:b8e65264-2f6f-4444-a160-2c94126a2e6d,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:57.258795 containerd[1523]: time="2025-11-23T23:05:57.258654907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b864c6b-zwbsd,Uid:4987122a-6b7a-47b0-a501-e31f8cdc6bd8,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:05:57.258986 containerd[1523]: time="2025-11-23T23:05:57.258654827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f95fbbd5-bxqch,Uid:9258d3b7-8de8-4b94-bd45-9195727d4ddb,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:05:57.423675 systemd-networkd[1440]: cali75b2d431a0a: Link UP Nov 23 23:05:57.424103 systemd-networkd[1440]: cali75b2d431a0a: Gained carrier Nov 23 23:05:57.440045 containerd[1523]: 2025-11-23 23:05:57.335 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2mghm-eth0 coredns-668d6bf9bc- kube-system b8e65264-2f6f-4444-a160-2c94126a2e6d 884 0 2025-11-23 23:05:20 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2mghm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali75b2d431a0a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mghm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2mghm-" Nov 23 23:05:57.440045 containerd[1523]: 2025-11-23 23:05:57.335 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mghm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" Nov 23 23:05:57.440045 containerd[1523]: 2025-11-23 23:05:57.371 [INFO][4288] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" HandleID="k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Workload="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.371 [INFO][4288] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" HandleID="k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Workload="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2mghm", "timestamp":"2025-11-23 23:05:57.371002587 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 
23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.371 [INFO][4288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.371 [INFO][4288] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.371 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.386 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" host="localhost" Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.394 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.400 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.402 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.405 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:57.440275 containerd[1523]: 2025-11-23 23:05:57.405 [INFO][4288] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" host="localhost" Nov 23 23:05:57.440470 containerd[1523]: 2025-11-23 23:05:57.406 [INFO][4288] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d Nov 23 23:05:57.440470 containerd[1523]: 2025-11-23 23:05:57.410 [INFO][4288] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" host="localhost" Nov 23 23:05:57.440470 containerd[1523]: 2025-11-23 23:05:57.415 [INFO][4288] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" host="localhost" Nov 23 23:05:57.440470 containerd[1523]: 2025-11-23 23:05:57.415 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" host="localhost" Nov 23 23:05:57.440470 containerd[1523]: 2025-11-23 23:05:57.416 [INFO][4288] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:05:57.440470 containerd[1523]: 2025-11-23 23:05:57.416 [INFO][4288] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" HandleID="k8s-pod-network.ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Workload="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" Nov 23 23:05:57.440932 containerd[1523]: 2025-11-23 23:05:57.421 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mghm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2mghm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b8e65264-2f6f-4444-a160-2c94126a2e6d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2mghm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75b2d431a0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:57.440994 containerd[1523]: 2025-11-23 23:05:57.421 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mghm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" Nov 23 23:05:57.440994 containerd[1523]: 2025-11-23 23:05:57.421 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75b2d431a0a ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mghm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" Nov 23 23:05:57.440994 containerd[1523]: 2025-11-23 23:05:57.425 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mghm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" Nov 23 23:05:57.441055 
containerd[1523]: 2025-11-23 23:05:57.426 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mghm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2mghm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b8e65264-2f6f-4444-a160-2c94126a2e6d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d", Pod:"coredns-668d6bf9bc-2mghm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75b2d431a0a", MAC:"56:5c:32:93:98:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:57.441055 containerd[1523]: 2025-11-23 23:05:57.436 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mghm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2mghm-eth0" Nov 23 23:05:57.482086 containerd[1523]: time="2025-11-23T23:05:57.482043528Z" level=info msg="connecting to shim ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d" address="unix:///run/containerd/s/2b63733bf009469e2a72dd7556a4ee4f6789d48b3da8eaa9de5943011c8b7df4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:57.511035 systemd[1]: Started cri-containerd-ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d.scope - libcontainer container ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d. 
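The Go struct dumps above print the endpoint ports in hex (Port:0x35, Port:0x23c1). Decoding them confirms these are the familiar CoreDNS ports rather than anything unusual:

```python
# The WorkloadEndpointPort values above are printed in hex by the Go
# struct dump; decoding shows the standard CoreDNS ports.
for name, hexport in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1)]:
    print(f"{name}: {hexport:#x} -> {int(hexport)}")
# dns: 0x35 -> 53, dns-tcp: 0x35 -> 53, metrics: 0x23c1 -> 9153
```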
Nov 23 23:05:57.529297 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:05:57.535065 systemd-networkd[1440]: cali9edf58dc8d9: Link UP Nov 23 23:05:57.537162 systemd-networkd[1440]: cali9edf58dc8d9: Gained carrier Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.339 [INFO][4273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0 calico-apiserver-865b864c6b- calico-apiserver 4987122a-6b7a-47b0-a501-e31f8cdc6bd8 888 0 2025-11-23 23:05:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:865b864c6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-865b864c6b-zwbsd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9edf58dc8d9 [] [] }} ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-zwbsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.340 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-zwbsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.376 [INFO][4294] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" HandleID="k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Workload="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.376 [INFO][4294] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" HandleID="k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Workload="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-865b864c6b-zwbsd", "timestamp":"2025-11-23 23:05:57.376047402 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.376 [INFO][4294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.416 [INFO][4294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.416 [INFO][4294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.488 [INFO][4294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.496 [INFO][4294] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.504 [INFO][4294] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.507 [INFO][4294] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.510 [INFO][4294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.510 [INFO][4294] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.512 [INFO][4294] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.520 [INFO][4294] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.529 [INFO][4294] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.529 [INFO][4294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" host="localhost" Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.529 [INFO][4294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
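Note the timestamps on the lock messages: the second CNI ADD asked for the host-wide IPAM lock at .376 but only acquired it at .416, immediately after the first request released it. A minimal sketch (not Calico's implementation) of why concurrent sandbox creations end up with strictly sequential addresses from the shared block; the pre-taken .129 is a hypothetical stand-in for an earlier allocation:

```python
# Minimal sketch (not Calico code) of the serialization visible above:
# allocation from the shared /26 is guarded by one host-wide lock, so each
# pod sees the block state left by the previous one and claims the next
# free address.
import threading
import ipaddress

block = list(ipaddress.ip_network("192.168.88.128/26").hosts())
allocated = {block[0]}       # hypothetical: .129 taken by an earlier pod
lock = threading.Lock()      # stands in for the host-wide IPAM lock

def assign(pod):
    with lock:               # "About to acquire host-wide IPAM lock."
        ip = next(a for a in block if a not in allocated)
        allocated.add(ip)    # "Writing block in order to claim IPs"
        print(pod, "->", ip) # lock released on exit from the with-block

threads = [threading.Thread(target=assign, args=(p,))
           for p in ("coredns-2mghm", "apiserver-zwbsd", "apiserver-bxqch")]
for t in threads: t.start()
for t in threads: t.join()
```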
Nov 23 23:05:57.562592 containerd[1523]: 2025-11-23 23:05:57.529 [INFO][4294] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" HandleID="k8s-pod-network.2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Workload="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" Nov 23 23:05:57.564669 containerd[1523]: 2025-11-23 23:05:57.531 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-zwbsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0", GenerateName:"calico-apiserver-865b864c6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"4987122a-6b7a-47b0-a501-e31f8cdc6bd8", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865b864c6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-865b864c6b-zwbsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9edf58dc8d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:57.564669 containerd[1523]: 2025-11-23 23:05:57.532 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-zwbsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" Nov 23 23:05:57.564669 containerd[1523]: 2025-11-23 23:05:57.532 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9edf58dc8d9 ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-zwbsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" Nov 23 23:05:57.564669 containerd[1523]: 2025-11-23 23:05:57.535 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-zwbsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" Nov 23 23:05:57.564669 containerd[1523]: 2025-11-23 23:05:57.536 [INFO][4273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-zwbsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0", GenerateName:"calico-apiserver-865b864c6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"4987122a-6b7a-47b0-a501-e31f8cdc6bd8", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865b864c6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be", Pod:"calico-apiserver-865b864c6b-zwbsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9edf58dc8d9", MAC:"8a:e4:f9:df:e5:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:57.564669 containerd[1523]: 2025-11-23 23:05:57.555 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-zwbsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--zwbsd-eth0" Nov 23 23:05:57.579348 containerd[1523]: time="2025-11-23T23:05:57.579291207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mghm,Uid:b8e65264-2f6f-4444-a160-2c94126a2e6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d\"" Nov 23 23:05:57.580134 kubelet[2682]: E1123 23:05:57.580114 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:57.590027 containerd[1523]: time="2025-11-23T23:05:57.589986441Z" level=info msg="CreateContainer within sandbox \"ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:05:57.599767 containerd[1523]: time="2025-11-23T23:05:57.599725623Z" level=info msg="connecting to shim 2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be" address="unix:///run/containerd/s/f4b0003b2944afe227da6a1b3044af8ee959a0af01dd508899d5a1f7aec713fe" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:57.602168 containerd[1523]: time="2025-11-23T23:05:57.602135155Z" level=info msg="Container 55d1f2a389c5e76023d0c876f800e16dd0325b930c35a5c64c2bcd5ee64f8436: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:57.609801 containerd[1523]: 
time="2025-11-23T23:05:57.609692356Z" level=info msg="CreateContainer within sandbox \"ecc203e6e627015cefd432ac4784f568273e0a93425e39de0d2ae02a6ac49a2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55d1f2a389c5e76023d0c876f800e16dd0325b930c35a5c64c2bcd5ee64f8436\"" Nov 23 23:05:57.610722 containerd[1523]: time="2025-11-23T23:05:57.610691014Z" level=info msg="StartContainer for \"55d1f2a389c5e76023d0c876f800e16dd0325b930c35a5c64c2bcd5ee64f8436\"" Nov 23 23:05:57.613541 containerd[1523]: time="2025-11-23T23:05:57.613510442Z" level=info msg="connecting to shim 55d1f2a389c5e76023d0c876f800e16dd0325b930c35a5c64c2bcd5ee64f8436" address="unix:///run/containerd/s/2b63733bf009469e2a72dd7556a4ee4f6789d48b3da8eaa9de5943011c8b7df4" protocol=ttrpc version=3 Nov 23 23:05:57.627711 systemd[1]: Started cri-containerd-2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be.scope - libcontainer container 2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be. Nov 23 23:05:57.631780 systemd-networkd[1440]: cali8f4d5af5a83: Link UP Nov 23 23:05:57.632578 systemd-networkd[1440]: cali8f4d5af5a83: Gained carrier Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.341 [INFO][4251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0 calico-apiserver-57f95fbbd5- calico-apiserver 9258d3b7-8de8-4b94-bd45-9195727d4ddb 886 0 2025-11-23 23:05:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57f95fbbd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57f95fbbd5-bxqch eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f4d5af5a83 [] [] }} ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Namespace="calico-apiserver" Pod="calico-apiserver-57f95fbbd5-bxqch" WorkloadEndpoint="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.342 [INFO][4251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Namespace="calico-apiserver" Pod="calico-apiserver-57f95fbbd5-bxqch" WorkloadEndpoint="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.388 [INFO][4300] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" HandleID="k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Workload="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.388 [INFO][4300] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" HandleID="k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Workload="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57f95fbbd5-bxqch", "timestamp":"2025-11-23 23:05:57.388120986 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.388 [INFO][4300] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.529 [INFO][4300] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.529 [INFO][4300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.589 [INFO][4300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.596 [INFO][4300] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.605 [INFO][4300] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.608 [INFO][4300] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.611 [INFO][4300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.611 [INFO][4300] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.613 [INFO][4300] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02 Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.618 [INFO][4300] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.625 [INFO][4300] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.625 [INFO][4300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" host="localhost" Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.625 [INFO][4300] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
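The recurring kubelet "Nameserver limits exceeded" warnings in this log come from the node's resolv.conf listing more nameservers than the classic glibc limit of three (MAXNS); kubelet applies the first three, which is exactly the "applied nameserver line: 1.1.1.1 1.0.0.1 8.8.8.8" it reports. A sketch of that truncation, with a hypothetical fourth nameserver standing in for whatever the node actually had configured:

```python
# Sketch of the truncation behind the kubelet warning above. The fourth
# nameserver here is hypothetical; the log only shows the three that were
# kept.
resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

MAXNS = 3  # classic glibc resolver limit; kubelet enforces the same cap
servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.startswith("nameserver")]
applied, omitted = servers[:MAXNS], servers[MAXNS:]
print("applied:", " ".join(applied))  # 1.1.1.1 1.0.0.1 8.8.8.8 (as logged)
print("omitted:", omitted)            # the extras the warning refers to
```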
Nov 23 23:05:57.651155 containerd[1523]: 2025-11-23 23:05:57.625 [INFO][4300] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" HandleID="k8s-pod-network.2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Workload="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" Nov 23 23:05:57.651935 containerd[1523]: 2025-11-23 23:05:57.629 [INFO][4251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Namespace="calico-apiserver" Pod="calico-apiserver-57f95fbbd5-bxqch" WorkloadEndpoint="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0", GenerateName:"calico-apiserver-57f95fbbd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9258d3b7-8de8-4b94-bd45-9195727d4ddb", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f95fbbd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57f95fbbd5-bxqch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f4d5af5a83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:57.651935 containerd[1523]: 2025-11-23 23:05:57.629 [INFO][4251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Namespace="calico-apiserver" Pod="calico-apiserver-57f95fbbd5-bxqch" WorkloadEndpoint="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" Nov 23 23:05:57.651935 containerd[1523]: 2025-11-23 23:05:57.629 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f4d5af5a83 ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Namespace="calico-apiserver" Pod="calico-apiserver-57f95fbbd5-bxqch" WorkloadEndpoint="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" Nov 23 23:05:57.651935 containerd[1523]: 2025-11-23 23:05:57.632 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Namespace="calico-apiserver" Pod="calico-apiserver-57f95fbbd5-bxqch" WorkloadEndpoint="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" Nov 23 23:05:57.651935 containerd[1523]: 2025-11-23 23:05:57.633 [INFO][4251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Namespace="calico-apiserver" Pod="calico-apiserver-57f95fbbd5-bxqch" WorkloadEndpoint="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0", GenerateName:"calico-apiserver-57f95fbbd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9258d3b7-8de8-4b94-bd45-9195727d4ddb", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57f95fbbd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02", Pod:"calico-apiserver-57f95fbbd5-bxqch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f4d5af5a83", MAC:"9a:48:2e:6f:a9:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:57.651935 containerd[1523]: 2025-11-23 23:05:57.644 [INFO][4251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" Namespace="calico-apiserver" Pod="calico-apiserver-57f95fbbd5-bxqch" WorkloadEndpoint="localhost-k8s-calico--apiserver--57f95fbbd5--bxqch-eth0" Nov 23 23:05:57.653264 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:05:57.659707 systemd[1]: Started cri-containerd-55d1f2a389c5e76023d0c876f800e16dd0325b930c35a5c64c2bcd5ee64f8436.scope - libcontainer container 55d1f2a389c5e76023d0c876f800e16dd0325b930c35a5c64c2bcd5ee64f8436. 
Nov 23 23:05:57.690954 containerd[1523]: time="2025-11-23T23:05:57.690804333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b864c6b-zwbsd,Uid:4987122a-6b7a-47b0-a501-e31f8cdc6bd8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2ac727a0e671419246f89e02fc58af3fc22d0ac7b993be8673e9acbc672315be\"" Nov 23 23:05:57.692931 containerd[1523]: time="2025-11-23T23:05:57.692902582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:05:57.701068 containerd[1523]: time="2025-11-23T23:05:57.701021701Z" level=info msg="connecting to shim 2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02" address="unix:///run/containerd/s/23dd34602de590e2180dbb4c3537b9dd6b0e3fd13b4cbca8b507efb04b6ac31c" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:57.712936 containerd[1523]: time="2025-11-23T23:05:57.712873454Z" level=info msg="StartContainer for \"55d1f2a389c5e76023d0c876f800e16dd0325b930c35a5c64c2bcd5ee64f8436\" returns successfully" Nov 23 23:05:57.730740 systemd[1]: Started cri-containerd-2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02.scope - libcontainer container 2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02. Nov 23 23:05:57.744130 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:05:57.772617 containerd[1523]: time="2025-11-23T23:05:57.772457624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57f95fbbd5-bxqch,Uid:9258d3b7-8de8-4b94-bd45-9195727d4ddb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2461fa996783889b704cdaa2f8ecbaceb3a7c70d07eced324f1b51917759ec02\"" Nov 23 23:05:57.899255 containerd[1523]: time="2025-11-23T23:05:57.899194287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:57.900273 containerd[1523]: time="2025-11-23T23:05:57.900191904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:05:57.900273 containerd[1523]: time="2025-11-23T23:05:57.900236790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:05:57.900490 kubelet[2682]: E1123 23:05:57.900447 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:57.900576 kubelet[2682]: E1123 23:05:57.900513 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:57.900869 kubelet[2682]: E1123 23:05:57.900776 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54kxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-865b864c6b-zwbsd_calico-apiserver(4987122a-6b7a-47b0-a501-e31f8cdc6bd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:57.900971 containerd[1523]: time="2025-11-23T23:05:57.900916484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:05:57.901987 kubelet[2682]: E1123 23:05:57.901939 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" podUID="4987122a-6b7a-47b0-a501-e31f8cdc6bd8" Nov 23 23:05:58.113789 containerd[1523]: time="2025-11-23T23:05:58.113664677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:58.114794 containerd[1523]: time="2025-11-23T23:05:58.114732581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 
23:05:58.114906 containerd[1523]: time="2025-11-23T23:05:58.114811392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:05:58.114993 kubelet[2682]: E1123 23:05:58.114938 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:58.114993 kubelet[2682]: E1123 23:05:58.114990 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:58.115168 kubelet[2682]: E1123 23:05:58.115122 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdnxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57f95fbbd5-bxqch_calico-apiserver(9258d3b7-8de8-4b94-bd45-9195727d4ddb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:58.117382 
kubelet[2682]: E1123 23:05:58.117204 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" podUID="9258d3b7-8de8-4b94-bd45-9195727d4ddb" Nov 23 23:05:58.442601 kubelet[2682]: E1123 23:05:58.442451 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" podUID="4987122a-6b7a-47b0-a501-e31f8cdc6bd8" Nov 23 23:05:58.443934 kubelet[2682]: E1123 23:05:58.443852 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:58.448754 kubelet[2682]: E1123 23:05:58.448697 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" podUID="9258d3b7-8de8-4b94-bd45-9195727d4ddb" Nov 23 23:05:58.492517 kubelet[2682]: I1123 23:05:58.492373 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2mghm" podStartSLOduration=38.492355772 podStartE2EDuration="38.492355772s" podCreationTimestamp="2025-11-23 23:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:58.474780841 +0000 UTC m=+44.319487107" watchObservedRunningTime="2025-11-23 23:05:58.492355772 +0000 UTC m=+44.337061718" Nov 23 23:05:58.805667 systemd-networkd[1440]: cali75b2d431a0a: Gained IPv6LL Nov 23 23:05:59.253877 systemd-networkd[1440]: cali8f4d5af5a83: Gained IPv6LL Nov 23 23:05:59.260514 containerd[1523]: time="2025-11-23T23:05:59.258992678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9f955d8c-k5vtv,Uid:34a9e722-c2ac-40a8-8496-e24ef8260bba,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:59.260514 containerd[1523]: time="2025-11-23T23:05:59.259989370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mbtpf,Uid:583c9149-22f9-45c3-9bb3-3e5f60548c49,Namespace:calico-system,Attempt:0,}" Nov 23 23:05:59.260514 containerd[1523]: time="2025-11-23T23:05:59.260005092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nv6cn,Uid:6ab77788-9835-4030-a5a9-a00cf34b7381,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:59.260996 kubelet[2682]: E1123 
23:05:59.259180 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:59.261782 containerd[1523]: time="2025-11-23T23:05:59.261739362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b864c6b-p6bp2,Uid:bab07697-3b96-415b-b1d7-632329a49d75,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:05:59.450224 kubelet[2682]: E1123 23:05:59.450182 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:59.452743 kubelet[2682]: E1123 23:05:59.452613 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" podUID="4987122a-6b7a-47b0-a501-e31f8cdc6bd8" Nov 23 23:05:59.468394 kubelet[2682]: E1123 23:05:59.466791 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" podUID="9258d3b7-8de8-4b94-bd45-9195727d4ddb" Nov 23 23:05:59.479283 systemd-networkd[1440]: cali47824cad035: Link UP Nov 23 23:05:59.479974 systemd-networkd[1440]: cali47824cad035: Gained carrier Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.359 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0 calico-kube-controllers-6f9f955d8c- calico-system 34a9e722-c2ac-40a8-8496-e24ef8260bba 885 0 2025-11-23 23:05:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f9f955d8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f9f955d8c-k5vtv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali47824cad035 [] [] }} ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Namespace="calico-system" Pod="calico-kube-controllers-6f9f955d8c-k5vtv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.359 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Namespace="calico-system" Pod="calico-kube-controllers-6f9f955d8c-k5vtv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" Nov 23 23:05:59.502449 
containerd[1523]: 2025-11-23 23:05:59.414 [INFO][4590] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" HandleID="k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Workload="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.414 [INFO][4590] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" HandleID="k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Workload="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000118380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f9f955d8c-k5vtv", "timestamp":"2025-11-23 23:05:59.414417192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.414 [INFO][4590] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.414 [INFO][4590] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.414 [INFO][4590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.431 [INFO][4590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" host="localhost" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.438 [INFO][4590] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.445 [INFO][4590] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.448 [INFO][4590] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.454 [INFO][4590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.454 [INFO][4590] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" host="localhost" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.456 [INFO][4590] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782 Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.464 [INFO][4590] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" host="localhost" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.471 [INFO][4590] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" host="localhost" Nov 23 
23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.471 [INFO][4590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" host="localhost" Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.471 [INFO][4590] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:05:59.502449 containerd[1523]: 2025-11-23 23:05:59.471 [INFO][4590] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" HandleID="k8s-pod-network.462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Workload="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" Nov 23 23:05:59.503671 containerd[1523]: 2025-11-23 23:05:59.475 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Namespace="calico-system" Pod="calico-kube-controllers-6f9f955d8c-k5vtv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0", GenerateName:"calico-kube-controllers-6f9f955d8c-", Namespace:"calico-system", SelfLink:"", UID:"34a9e722-c2ac-40a8-8496-e24ef8260bba", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9f955d8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f9f955d8c-k5vtv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47824cad035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:59.503671 containerd[1523]: 2025-11-23 23:05:59.475 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Namespace="calico-system" Pod="calico-kube-controllers-6f9f955d8c-k5vtv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" Nov 23 23:05:59.503671 containerd[1523]: 2025-11-23 23:05:59.475 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47824cad035 ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Namespace="calico-system" Pod="calico-kube-controllers-6f9f955d8c-k5vtv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" Nov 23 23:05:59.503671 containerd[1523]: 2025-11-23 23:05:59.477 [INFO][4528] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Namespace="calico-system" Pod="calico-kube-controllers-6f9f955d8c-k5vtv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" Nov 23 23:05:59.503671 containerd[1523]: 2025-11-23 23:05:59.477 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Namespace="calico-system" Pod="calico-kube-controllers-6f9f955d8c-k5vtv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0", GenerateName:"calico-kube-controllers-6f9f955d8c-", Namespace:"calico-system", SelfLink:"", UID:"34a9e722-c2ac-40a8-8496-e24ef8260bba", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9f955d8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782", Pod:"calico-kube-controllers-6f9f955d8c-k5vtv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47824cad035", MAC:"36:13:b1:a6:b2:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:59.503671 containerd[1523]: 2025-11-23 23:05:59.499 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" Namespace="calico-system" Pod="calico-kube-controllers-6f9f955d8c-k5vtv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9f955d8c--k5vtv-eth0" Nov 23 23:05:59.509747 systemd-networkd[1440]: cali9edf58dc8d9: Gained IPv6LL Nov 23 23:05:59.533622 containerd[1523]: time="2025-11-23T23:05:59.533410647Z" level=info msg="connecting to shim 462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782" address="unix:///run/containerd/s/76468cdb29fea7f1899d6b8c0a6048e3ab2c1984b01f4baf36b7785c5fd0423a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:59.560774 systemd[1]: Started cri-containerd-462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782.scope - libcontainer container 462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782. 
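The 404s above mean the tag ghcr.io/flatcar/calico/apiserver:v3.30.4 simply does not resolve at the registry. The lookup containerd performs can be approximated against the registry's v2 API; this sketch assumes ghcr's standard anonymous-pull token flow (Docker registry v2 auth) and is not containerd code:

```python
# Sketch of the check behind the 404s above: ask ghcr.io's registry v2 API
# whether the manifest for the tag exists. Assumes ghcr's standard
# anonymous-pull token endpoint; details of the auth flow may vary.
import json
import urllib.error
import urllib.request

repo, tag = "flatcar/calico/apiserver", "v3.30.4"
tok_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
token = json.load(urllib.request.urlopen(tok_url))["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{repo}/manifests/{tag}",
    method="HEAD",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json",
    },
)
try:
    urllib.request.urlopen(req)
    print("manifest exists")
except urllib.error.HTTPError as e:
    print("manifest lookup failed:", e.code)  # 404, matching the log
```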
Nov 23 23:05:59.580741 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:05:59.585044 systemd-networkd[1440]: cali7a8db9c3b12: Link UP Nov 23 23:05:59.587698 systemd-networkd[1440]: cali7a8db9c3b12: Gained carrier Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.373 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0 calico-apiserver-865b864c6b- calico-apiserver bab07697-3b96-415b-b1d7-632329a49d75 889 0 2025-11-23 23:05:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:865b864c6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-865b864c6b-p6bp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a8db9c3b12 [] [] }} ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-p6bp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.373 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-p6bp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.421 [INFO][4602] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" HandleID="k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Workload="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.421 [INFO][4602] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" HandleID="k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Workload="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000118ed0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-865b864c6b-p6bp2", "timestamp":"2025-11-23 23:05:59.421258616 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.421 [INFO][4602] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.472 [INFO][4602] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.472 [INFO][4602] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.532 [INFO][4602] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.540 [INFO][4602] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.547 [INFO][4602] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.550 [INFO][4602] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.554 [INFO][4602] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.554 [INFO][4602] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.558 [INFO][4602] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9 Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.565 [INFO][4602] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.574 [INFO][4602] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.574 [INFO][4602] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" host="localhost" Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.574 [INFO][4602] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:05:59.608232 containerd[1523]: 2025-11-23 23:05:59.574 [INFO][4602] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" HandleID="k8s-pod-network.5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Workload="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" Nov 23 23:05:59.608787 containerd[1523]: 2025-11-23 23:05:59.581 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-p6bp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0", GenerateName:"calico-apiserver-865b864c6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bab07697-3b96-415b-b1d7-632329a49d75", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865b864c6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-865b864c6b-p6bp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a8db9c3b12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:59.608787 containerd[1523]: 2025-11-23 23:05:59.581 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-p6bp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" Nov 23 23:05:59.608787 containerd[1523]: 2025-11-23 23:05:59.581 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a8db9c3b12 ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-p6bp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" Nov 23 23:05:59.608787 containerd[1523]: 2025-11-23 23:05:59.586 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-p6bp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" Nov 23 23:05:59.608787 containerd[1523]: 2025-11-23 23:05:59.588 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-p6bp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0", GenerateName:"calico-apiserver-865b864c6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bab07697-3b96-415b-b1d7-632329a49d75", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865b864c6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9", Pod:"calico-apiserver-865b864c6b-p6bp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a8db9c3b12", MAC:"5a:cf:5f:ae:fb:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:59.608787 containerd[1523]: 2025-11-23 23:05:59.603 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b864c6b-p6bp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--865b864c6b--p6bp2-eth0" Nov 23 23:05:59.620809 containerd[1523]: time="2025-11-23T23:05:59.620764159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9f955d8c-k5vtv,Uid:34a9e722-c2ac-40a8-8496-e24ef8260bba,Namespace:calico-system,Attempt:0,} returns sandbox id \"462394078461dc8452f744c7ae22e8e781a95097533d99aae78f5a2377ab9782\"" Nov 23 23:05:59.622790 containerd[1523]: time="2025-11-23T23:05:59.622743620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:05:59.638909 containerd[1523]: time="2025-11-23T23:05:59.638673287Z" level=info msg="connecting to shim 5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9" address="unix:///run/containerd/s/5c2baaf27374c65caedc060156ad53cad7738bffaa2af0b972bf24e67cc0cf0a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:59.671702 systemd[1]: Started cri-containerd-5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9.scope - libcontainer container 5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9. 
Nov 23 23:05:59.676374 systemd-networkd[1440]: cali26d5279013c: Link UP Nov 23 23:05:59.676846 systemd-networkd[1440]: cali26d5279013c: Gained carrier Nov 23 23:05:59.690754 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.370 [INFO][4531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--mbtpf-eth0 goldmane-666569f655- calico-system 583c9149-22f9-45c3-9bb3-3e5f60548c49 883 0 2025-11-23 23:05:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-mbtpf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali26d5279013c [] [] }} ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Namespace="calico-system" Pod="goldmane-666569f655-mbtpf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbtpf-" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.371 [INFO][4531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Namespace="calico-system" Pod="goldmane-666569f655-mbtpf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbtpf-eth0" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.425 [INFO][4600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" HandleID="k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Workload="localhost-k8s-goldmane--666569f655--mbtpf-eth0" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.427 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" HandleID="k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Workload="localhost-k8s-goldmane--666569f655--mbtpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a14f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-mbtpf", "timestamp":"2025-11-23 23:05:59.425776294 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.427 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.574 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.576 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.632 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.640 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.647 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.649 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.654 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.654 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.656 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.661 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.668 [INFO][4600] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.669 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" host="localhost" Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.669 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:05:59.697727 containerd[1523]: 2025-11-23 23:05:59.669 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" HandleID="k8s-pod-network.36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Workload="localhost-k8s-goldmane--666569f655--mbtpf-eth0" Nov 23 23:05:59.699244 containerd[1523]: 2025-11-23 23:05:59.672 [INFO][4531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Namespace="calico-system" Pod="goldmane-666569f655-mbtpf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbtpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mbtpf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"583c9149-22f9-45c3-9bb3-3e5f60548c49", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-mbtpf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26d5279013c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:59.699244 containerd[1523]: 2025-11-23 23:05:59.672 [INFO][4531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Namespace="calico-system" Pod="goldmane-666569f655-mbtpf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbtpf-eth0" Nov 23 23:05:59.699244 containerd[1523]: 2025-11-23 23:05:59.673 [INFO][4531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26d5279013c ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Namespace="calico-system" Pod="goldmane-666569f655-mbtpf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbtpf-eth0" Nov 23 23:05:59.699244 containerd[1523]: 2025-11-23 23:05:59.677 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Namespace="calico-system" Pod="goldmane-666569f655-mbtpf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbtpf-eth0" Nov 23 23:05:59.699244 containerd[1523]: 2025-11-23 23:05:59.677 [INFO][4531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Namespace="calico-system" Pod="goldmane-666569f655-mbtpf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbtpf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mbtpf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"583c9149-22f9-45c3-9bb3-3e5f60548c49", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b", Pod:"goldmane-666569f655-mbtpf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26d5279013c", MAC:"ae:c5:a4:fd:76:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:59.699244 containerd[1523]: 2025-11-23 23:05:59.695 [INFO][4531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" Namespace="calico-system" Pod="goldmane-666569f655-mbtpf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbtpf-eth0" Nov 23 23:05:59.725264 containerd[1523]: time="2025-11-23T23:05:59.725209770Z" level=info msg="connecting to shim 36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b" address="unix:///run/containerd/s/4803a00b93ad6f754de4120d08500a664e948ad7bf3e1377acc19bb77fe1bed9" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:59.730369 containerd[1523]: time="2025-11-23T23:05:59.730327367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b864c6b-p6bp2,Uid:bab07697-3b96-415b-b1d7-632329a49d75,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5ca4f69e9e85aef38a66868e95b12d23d83e523961429071303d7bfdc5433bd9\"" Nov 23 23:05:59.778622 systemd[1]: Started cri-containerd-36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b.scope - libcontainer container 36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b. Nov 23 23:05:59.802577 systemd-networkd[1440]: caliee6e41d7380: Link UP Nov 23 23:05:59.803424 systemd-networkd[1440]: caliee6e41d7380: Gained carrier Nov 23 23:05:59.822399 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:51038.service - OpenSSH per-connection server daemon (10.0.0.1:51038). 
Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.381 [INFO][4542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0 coredns-668d6bf9bc- kube-system 6ab77788-9835-4030-a5a9-a00cf34b7381 887 0 2025-11-23 23:05:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-nv6cn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliee6e41d7380 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Namespace="kube-system" Pod="coredns-668d6bf9bc-nv6cn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nv6cn-" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.382 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Namespace="kube-system" Pod="coredns-668d6bf9bc-nv6cn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.440 [INFO][4613] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" HandleID="k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Workload="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.440 [INFO][4613] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" HandleID="k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Workload="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c260), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-nv6cn", "timestamp":"2025-11-23 23:05:59.44011355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.440 [INFO][4613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.669 [INFO][4613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.669 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.734 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.748 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.758 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.761 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.766 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.767 [INFO][4613] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.769 [INFO][4613] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466 Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.782 [INFO][4613] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.794 [INFO][4613] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.794 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" host="localhost" Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.794 [INFO][4613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
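
Note how the four concurrent CNI ADDs serialize: goroutine [4613] logs "About to acquire host-wide IPAM lock" at 23:05:59.440 but only acquires it at 23:05:59.669, after [4602] and [4600] have released it. One lock per host keeps block claims race-free at the cost of queueing, which the toy Go model below reproduces (pod names taken from the log, sleeps standing in for datastore writes):

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var ipamLock sync.Mutex // one per host, like Calico's host-wide IPAM lock
	var wg sync.WaitGroup
	next := 133 // next free ordinal in 192.168.88.128/26 at this point in the log
	for _, pod := range []string{"calico-kube-controllers", "calico-apiserver", "goldmane", "coredns"} {
		wg.Add(1)
		go func(pod string) { // one goroutine per concurrent CNI ADD
			defer wg.Done()
			ipamLock.Lock()         // "About to acquire host-wide IPAM lock."
			defer ipamLock.Unlock() // "Released host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.88.%d/26", next)
			next++
			time.Sleep(10 * time.Millisecond) // stands in for the datastore block write
			fmt.Printf("%s assigned %s\n", pod, ip)
		}(pod)
	}
	wg.Wait() // acquisition order, and hence the .133..136 mapping, is unordered here
}
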
Nov 23 23:05:59.825037 containerd[1523]: 2025-11-23 23:05:59.794 [INFO][4613] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" HandleID="k8s-pod-network.0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Workload="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" Nov 23 23:05:59.825560 containerd[1523]: 2025-11-23 23:05:59.798 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Namespace="kube-system" Pod="coredns-668d6bf9bc-nv6cn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6ab77788-9835-4030-a5a9-a00cf34b7381", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-nv6cn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee6e41d7380", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:59.825560 containerd[1523]: 2025-11-23 23:05:59.798 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Namespace="kube-system" Pod="coredns-668d6bf9bc-nv6cn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" Nov 23 23:05:59.825560 containerd[1523]: 2025-11-23 23:05:59.798 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee6e41d7380 ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Namespace="kube-system" Pod="coredns-668d6bf9bc-nv6cn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" Nov 23 23:05:59.825560 containerd[1523]: 2025-11-23 23:05:59.803 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Namespace="kube-system" Pod="coredns-668d6bf9bc-nv6cn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" Nov 23 23:05:59.825560 
containerd[1523]: 2025-11-23 23:05:59.804 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Namespace="kube-system" Pod="coredns-668d6bf9bc-nv6cn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6ab77788-9835-4030-a5a9-a00cf34b7381", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466", Pod:"coredns-668d6bf9bc-nv6cn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee6e41d7380", MAC:"6e:2e:a2:7a:9b:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:05:59.825560 containerd[1523]: 2025-11-23 23:05:59.816 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" Namespace="kube-system" Pod="coredns-668d6bf9bc-nv6cn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nv6cn-eth0" Nov 23 23:05:59.830722 containerd[1523]: time="2025-11-23T23:05:59.830677397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:59.833117 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:05:59.835016 containerd[1523]: time="2025-11-23T23:05:59.834956803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:05:59.835320 containerd[1523]: time="2025-11-23T23:05:59.835231799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:05:59.835419 kubelet[2682]: E1123 23:05:59.835361 2682 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:05:59.835419 kubelet[2682]: E1123 23:05:59.835414 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:05:59.836065 kubelet[2682]: E1123 23:05:59.835909 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5gts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f9f955d8c-k5vtv_calico-system(34a9e722-c2ac-40a8-8496-e24ef8260bba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:59.837344 kubelet[2682]: E1123 23:05:59.837296 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" podUID="34a9e722-c2ac-40a8-8496-e24ef8260bba" Nov 23 23:05:59.838786 containerd[1523]: time="2025-11-23T23:05:59.838193511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:05:59.865251 containerd[1523]: time="2025-11-23T23:05:59.865205083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mbtpf,Uid:583c9149-22f9-45c3-9bb3-3e5f60548c49,Namespace:calico-system,Attempt:0,} returns sandbox id \"36d9887fcc030b7cdf81b24fea27c890f5103685b3c5ee66f85731ceaf4f5e0b\"" Nov 23 23:05:59.879601 containerd[1523]: time="2025-11-23T23:05:59.878684946Z" level=info msg="connecting to shim 0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466" address="unix:///run/containerd/s/4ca8e09e465d93bd850968b5eac8c816136020aff689106b81dd97dc89daea54" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:59.902641 sshd[4795]: Accepted publickey for core from 10.0.0.1 port 51038 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:05:59.904316 sshd-session[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:59.904893 systemd[1]: Started cri-containerd-0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466.scope - libcontainer container 0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466. Nov 23 23:05:59.909773 systemd-logind[1494]: New session 9 of user core. Nov 23 23:05:59.911142 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 23 23:05:59.921477 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:05:59.950716 containerd[1523]: time="2025-11-23T23:05:59.950668705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nv6cn,Uid:6ab77788-9835-4030-a5a9-a00cf34b7381,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466\"" Nov 23 23:05:59.951713 kubelet[2682]: E1123 23:05:59.951680 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:05:59.954012 containerd[1523]: time="2025-11-23T23:05:59.953965101Z" level=info msg="CreateContainer within sandbox \"0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:05:59.966540 containerd[1523]: time="2025-11-23T23:05:59.966293451Z" level=info msg="Container baf4d6764225689b88b17c336b4b3d045fb39a3c8867551804376655c8045d9a: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:59.972931 containerd[1523]: time="2025-11-23T23:05:59.972884522Z" level=info msg="CreateContainer within sandbox \"0a7a1352dcd098a677cbc87d57fe8ea9e3f0da372bca0cef2a192b10e5307466\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baf4d6764225689b88b17c336b4b3d045fb39a3c8867551804376655c8045d9a\"" Nov 23 23:05:59.973974 containerd[1523]: time="2025-11-23T23:05:59.973927860Z" level=info msg="StartContainer for \"baf4d6764225689b88b17c336b4b3d045fb39a3c8867551804376655c8045d9a\"" Nov 23 23:05:59.975530 containerd[1523]: time="2025-11-23T23:05:59.975279519Z" level=info msg="connecting to shim baf4d6764225689b88b17c336b4b3d045fb39a3c8867551804376655c8045d9a" address="unix:///run/containerd/s/4ca8e09e465d93bd850968b5eac8c816136020aff689106b81dd97dc89daea54" protocol=ttrpc version=3 Nov 23 23:06:00.005743 systemd[1]: Started cri-containerd-baf4d6764225689b88b17c336b4b3d045fb39a3c8867551804376655c8045d9a.scope - libcontainer container baf4d6764225689b88b17c336b4b3d045fb39a3c8867551804376655c8045d9a. 
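
The kubelet dns.go warning above fires because the node's resolv.conf lists more nameservers than libc's limit of three, so kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8). A small self-contained Go check for the same condition, assuming the standard /etc/resolv.conf path:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	var ns []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > 3 { // libc MAXNS: resolvers beyond the third are ignored
		fmt.Printf("nameserver limit exceeded: %d configured, applied line is: %s\n",
			len(ns), strings.Join(ns[:3], " "))
	}
}
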
Nov 23 23:06:00.041157 containerd[1523]: time="2025-11-23T23:06:00.040995150Z" level=info msg="StartContainer for \"baf4d6764225689b88b17c336b4b3d045fb39a3c8867551804376655c8045d9a\" returns successfully" Nov 23 23:06:00.058710 containerd[1523]: time="2025-11-23T23:06:00.058667402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:00.060176 containerd[1523]: time="2025-11-23T23:06:00.060088987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:06:00.060176 containerd[1523]: time="2025-11-23T23:06:00.060135073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:00.060377 kubelet[2682]: E1123 23:06:00.060345 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:00.060445 kubelet[2682]: E1123 23:06:00.060402 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:00.060648 kubelet[2682]: E1123 23:06:00.060608 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkj5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-865b864c6b-p6bp2_calico-apiserver(bab07697-3b96-415b-b1d7-632329a49d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:00.061533 containerd[1523]: time="2025-11-23T23:06:00.061490648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:06:00.062576 kubelet[2682]: E1123 23:06:00.062533 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" podUID="bab07697-3b96-415b-b1d7-632329a49d75" Nov 23 23:06:00.104933 sshd[4853]: Connection closed by 10.0.0.1 port 51038 Nov 23 23:06:00.105356 sshd-session[4795]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:00.109479 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:51038.service: Deactivated successfully. Nov 23 23:06:00.111714 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 23:06:00.112768 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:06:00.116416 systemd-logind[1494]: Removed session 9. 
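
At this point the same 404 pattern has hit kube-controllers and apiserver, with goldmane about to follow. When triaging a capture like this one, a throwaway filter helps; the hypothetical helper below (Go standard library only) reads journal text on stdin, for example piped from journalctl with cat output formatting, and prints each image reference whose pull failed:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches containerd's error form: msg="PullImage \"<image>\" failed"
	re := regexp.MustCompile(`PullImage \\"([^"\\]+)\\" failed`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // entries here run far past the 64 KiB default
	seen := map[string]bool{}
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil && !seen[m[1]] {
			seen[m[1]] = true
			fmt.Println(m[1]) // e.g. ghcr.io/flatcar/calico/kube-controllers:v3.30.4
		}
	}
}
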
Nov 23 23:06:00.259076 containerd[1523]: time="2025-11-23T23:06:00.259000349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdb4s,Uid:06591145-f7c8-4eb9-86a0-ddb163a9822f,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:00.277537 containerd[1523]: time="2025-11-23T23:06:00.276462695Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:00.282342 containerd[1523]: time="2025-11-23T23:06:00.281562516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:06:00.282750 containerd[1523]: time="2025-11-23T23:06:00.282464033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:00.282803 kubelet[2682]: E1123 23:06:00.282769 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:06:00.282845 kubelet[2682]: E1123 23:06:00.282821 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:06:00.283130 kubelet[2682]: E1123 23:06:00.282960 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9nr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mbtpf_calico-system(583c9149-22f9-45c3-9bb3-3e5f60548c49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:00.284209 kubelet[2682]: E1123 23:06:00.284155 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbtpf" podUID="583c9149-22f9-45c3-9bb3-3e5f60548c49" Nov 23 23:06:00.410105 systemd-networkd[1440]: cali4dcc8fd7ae6: Link UP Nov 23 23:06:00.410776 systemd-networkd[1440]: cali4dcc8fd7ae6: Gained carrier Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.309 [INFO][4907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pdb4s-eth0 csi-node-driver- calico-system 06591145-f7c8-4eb9-86a0-ddb163a9822f 771 0 2025-11-23 23:05:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pdb4s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4dcc8fd7ae6 [] [] }} ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Namespace="calico-system" Pod="csi-node-driver-pdb4s" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdb4s-" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.309 [INFO][4907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Namespace="calico-system" Pod="csi-node-driver-pdb4s" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdb4s-eth0" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.341 [INFO][4921] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" HandleID="k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Workload="localhost-k8s-csi--node--driver--pdb4s-eth0" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.341 [INFO][4921] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" HandleID="k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Workload="localhost-k8s-csi--node--driver--pdb4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000510a60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pdb4s", "timestamp":"2025-11-23 23:06:00.341360433 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.341 [INFO][4921] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.341 [INFO][4921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.341 [INFO][4921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.355 [INFO][4921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.362 [INFO][4921] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.368 [INFO][4921] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.371 [INFO][4921] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.375 [INFO][4921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.375 [INFO][4921] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.383 [INFO][4921] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21 Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.390 [INFO][4921] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.401 [INFO][4921] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.401 [INFO][4921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] 
handle="k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" host="localhost" Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.401 [INFO][4921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:06:00.429781 containerd[1523]: 2025-11-23 23:06:00.401 [INFO][4921] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" HandleID="k8s-pod-network.44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Workload="localhost-k8s-csi--node--driver--pdb4s-eth0" Nov 23 23:06:00.430338 containerd[1523]: 2025-11-23 23:06:00.406 [INFO][4907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Namespace="calico-system" Pod="csi-node-driver-pdb4s" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdb4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pdb4s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06591145-f7c8-4eb9-86a0-ddb163a9822f", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pdb4s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4dcc8fd7ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:00.430338 containerd[1523]: 2025-11-23 23:06:00.406 [INFO][4907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Namespace="calico-system" Pod="csi-node-driver-pdb4s" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdb4s-eth0" Nov 23 23:06:00.430338 containerd[1523]: 2025-11-23 23:06:00.407 [INFO][4907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dcc8fd7ae6 ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Namespace="calico-system" Pod="csi-node-driver-pdb4s" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdb4s-eth0" Nov 23 23:06:00.430338 containerd[1523]: 2025-11-23 23:06:00.413 [INFO][4907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Namespace="calico-system" Pod="csi-node-driver-pdb4s" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdb4s-eth0" Nov 23 23:06:00.430338 containerd[1523]: 2025-11-23 23:06:00.414 [INFO][4907] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Namespace="calico-system" Pod="csi-node-driver-pdb4s" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdb4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pdb4s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06591145-f7c8-4eb9-86a0-ddb163a9822f", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21", Pod:"csi-node-driver-pdb4s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4dcc8fd7ae6", MAC:"5e:0e:3a:1e:80:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:00.430338 containerd[1523]: 2025-11-23 23:06:00.427 [INFO][4907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" Namespace="calico-system" Pod="csi-node-driver-pdb4s" WorkloadEndpoint="localhost-k8s-csi--node--driver--pdb4s-eth0" Nov 23 23:06:00.454965 kubelet[2682]: E1123 23:06:00.454928 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:06:00.464854 containerd[1523]: time="2025-11-23T23:06:00.464794445Z" level=info msg="connecting to shim 44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21" address="unix:///run/containerd/s/1146106565962c342bdba9fa34d1975e3c277a873b77efa7c2ff690edf53923b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:00.470483 kubelet[2682]: E1123 23:06:00.470273 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" podUID="34a9e722-c2ac-40a8-8496-e24ef8260bba" Nov 23 23:06:00.473226 kubelet[2682]: E1123 23:06:00.473164 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:06:00.473773 kubelet[2682]: E1123 23:06:00.473546 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbtpf" podUID="583c9149-22f9-45c3-9bb3-3e5f60548c49" Nov 23 23:06:00.475743 kubelet[2682]: E1123 23:06:00.475274 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" podUID="bab07697-3b96-415b-b1d7-632329a49d75" Nov 23 23:06:00.481526 kubelet[2682]: I1123 23:06:00.480875 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nv6cn" podStartSLOduration=40.480857329 podStartE2EDuration="40.480857329s" podCreationTimestamp="2025-11-23 23:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:06:00.480351503 +0000 UTC m=+46.325057449" watchObservedRunningTime="2025-11-23 23:06:00.480857329 +0000 UTC m=+46.325563235" Nov 23 23:06:00.524159 systemd[1]: Started cri-containerd-44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21.scope - libcontainer container 44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21. 
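[Editor's annotation] The ipam/ipam.go trace above shows Calico taking the host-wide IPAM lock, confirming the node's affinity for the block 192.168.88.128/26, and claiming 192.168.88.137 for csi-node-driver-pdb4s. Below is a minimal Go sketch of that first-free-address walk over a /26 block. It illustrates the arithmetic only, not Calico's actual allocator (which tracks allocations in a bitmap inside the block document), and the nine pre-assigned addresses are a hypothetical state chosen to reproduce the .137 result seen in the log:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks an IPAM block (here a /26, i.e. 64 addresses) and
// returns the first address not yet assigned. Illustrative sketch,
// not Calico's implementation.
func firstFree(block netip.Prefix, assigned map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !assigned[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := map[netip.Addr]bool{}
	// Hypothetical prior state: nine addresses (.128-.136) already
	// handed out before csi-node-driver-pdb4s asked for one.
	for i, a := 0, block.Addr(); i < 9; i, a = i+1, a.Next() {
		assigned[a] = true
	}
	if ip, ok := firstFree(block, assigned); ok {
		fmt.Println(ip) // 192.168.88.137, matching the log
	}
}
```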
Nov 23 23:06:00.549113 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:00.587232 containerd[1523]: time="2025-11-23T23:06:00.587181201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pdb4s,Uid:06591145-f7c8-4eb9-86a0-ddb163a9822f,Namespace:calico-system,Attempt:0,} returns sandbox id \"44ba832810cbb4c2f984709208e068a51cf1fa9f7511f66766afbc4a58e3fd21\"" Nov 23 23:06:00.589722 containerd[1523]: time="2025-11-23T23:06:00.589687526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:06:00.809160 containerd[1523]: time="2025-11-23T23:06:00.809029259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:00.812406 containerd[1523]: time="2025-11-23T23:06:00.812339089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:06:00.812573 containerd[1523]: time="2025-11-23T23:06:00.812402017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:06:00.812694 kubelet[2682]: E1123 23:06:00.812651 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:06:00.812742 kubelet[2682]: E1123 23:06:00.812708 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:06:00.812906 kubelet[2682]: E1123 23:06:00.812827 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bjmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pdb4s_calico-system(06591145-f7c8-4eb9-86a0-ddb163a9822f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:00.815621 containerd[1523]: time="2025-11-23T23:06:00.815586590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:06:01.030072 containerd[1523]: time="2025-11-23T23:06:01.030011896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:01.031791 containerd[1523]: time="2025-11-23T23:06:01.031733555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:06:01.031881 containerd[1523]: time="2025-11-23T23:06:01.031856290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:06:01.032478 kubelet[2682]: E1123 23:06:01.032226 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:06:01.032658 kubelet[2682]: E1123 23:06:01.032630 2682 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:06:01.033155 kubelet[2682]: E1123 23:06:01.033110 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bjmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pdb4s_calico-system(06591145-f7c8-4eb9-86a0-ddb163a9822f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:01.034545 kubelet[2682]: E1123 23:06:01.034474 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:06:01.237683 systemd-networkd[1440]: cali26d5279013c: Gained IPv6LL Nov 23 23:06:01.265674 containerd[1523]: time="2025-11-23T23:06:01.265333866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:06:01.365704 systemd-networkd[1440]: cali7a8db9c3b12: Gained IPv6LL Nov 23 23:06:01.477177 kubelet[2682]: E1123 23:06:01.476917 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:06:01.478309 kubelet[2682]: E1123 23:06:01.477245 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" podUID="bab07697-3b96-415b-b1d7-632329a49d75" Nov 23 23:06:01.478309 kubelet[2682]: E1123 23:06:01.477405 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbtpf" podUID="583c9149-22f9-45c3-9bb3-3e5f60548c49" Nov 23 23:06:01.478309 kubelet[2682]: E1123 23:06:01.478107 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" podUID="34a9e722-c2ac-40a8-8496-e24ef8260bba" Nov 23 23:06:01.479234 containerd[1523]: time="2025-11-23T23:06:01.478806694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:01.481695 kubelet[2682]: E1123 23:06:01.481623 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:06:01.483676 containerd[1523]: time="2025-11-23T23:06:01.483614106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:06:01.483782 containerd[1523]: time="2025-11-23T23:06:01.483742362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:06:01.483974 kubelet[2682]: E1123 23:06:01.483937 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:06:01.484267 kubelet[2682]: E1123 23:06:01.484082 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:06:01.484267 kubelet[2682]: E1123 23:06:01.484211 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8a9e35afce540048647bad466d551b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztm4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d779b554-zljcz_calico-system(31a178cc-f6e2-4156-8d63-c50d6b225cdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Nov 23 23:06:01.486438 containerd[1523]: time="2025-11-23T23:06:01.486400101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:06:01.493696 systemd-networkd[1440]: cali47824cad035: Gained IPv6LL Nov 23 23:06:01.623272 systemd-networkd[1440]: caliee6e41d7380: Gained IPv6LL Nov 23 23:06:01.717043 containerd[1523]: time="2025-11-23T23:06:01.716997309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:01.718765 containerd[1523]: time="2025-11-23T23:06:01.718712968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:06:01.718839 containerd[1523]: time="2025-11-23T23:06:01.718766455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:06:01.719171 kubelet[2682]: E1123 23:06:01.719083 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:06:01.719171 kubelet[2682]: E1123 23:06:01.719139 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:06:01.720053 kubelet[2682]: E1123 23:06:01.719979 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztm4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d779b554-zljcz_calico-system(31a178cc-f6e2-4156-8d63-c50d6b225cdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:01.721210 kubelet[2682]: E1123 23:06:01.721163 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d779b554-zljcz" podUID="31a178cc-f6e2-4156-8d63-c50d6b225cdb" Nov 23 23:06:02.453686 systemd-networkd[1440]: cali4dcc8fd7ae6: Gained IPv6LL Nov 23 23:06:02.478312 kubelet[2682]: E1123 23:06:02.478275 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:06:02.480366 kubelet[2682]: E1123 23:06:02.479631 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f" Nov 23 23:06:05.124387 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:51042.service - OpenSSH per-connection server daemon (10.0.0.1:51042). Nov 23 23:06:05.189745 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 51042 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:06:05.191804 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:05.198814 systemd-logind[1494]: New session 10 of user core. Nov 23 23:06:05.209666 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:06:05.378397 sshd[4993]: Connection closed by 10.0.0.1 port 51042 Nov 23 23:06:05.378667 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:05.390022 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:51042.service: Deactivated successfully. Nov 23 23:06:05.392948 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:06:05.395280 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:06:05.398765 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:51048.service - OpenSSH per-connection server daemon (10.0.0.1:51048). Nov 23 23:06:05.401114 systemd-logind[1494]: Removed session 10. Nov 23 23:06:05.477441 sshd[5007]: Accepted publickey for core from 10.0.0.1 port 51048 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:06:05.480621 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:05.494003 systemd-logind[1494]: New session 11 of user core. Nov 23 23:06:05.500022 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:06:05.732923 sshd[5012]: Connection closed by 10.0.0.1 port 51048 Nov 23 23:06:05.735977 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:05.748957 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:51048.service: Deactivated successfully. Nov 23 23:06:05.753383 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:06:05.755571 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:06:05.762234 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:51060.service - OpenSSH per-connection server daemon (10.0.0.1:51060). Nov 23 23:06:05.764762 systemd-logind[1494]: Removed session 11. 
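[Editor's annotation] The recurring kubelet dns.go:153 warning above ("Nameserver limits exceeded ... 1.1.1.1 1.0.0.1 8.8.8.8") means the node's resolv.conf lists more than three nameservers, while the glibc resolver only honors the first three (MAXNS), so kubelet drops the extras when composing pod DNS config. A sketch of that trimming under the conventional resolv.conf format; the fourth server in the sample input is a hypothetical value, since the log only shows the three that survived:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors glibc's MAXNS; kubelet warns and truncates
// when a node's resolv.conf exceeds it (behavior summarized here,
// not kubelet's literal code).
const maxNameservers = 3

func nameservers(resolvConf string) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, keeping first %d of %d\n",
			maxNameservers, len(ns))
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(nameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8], as applied in the log
}
```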
Nov 23 23:06:05.825377 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 51060 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:06:05.827403 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:05.833837 systemd-logind[1494]: New session 12 of user core. Nov 23 23:06:05.839826 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 23:06:06.001272 sshd[5027]: Connection closed by 10.0.0.1 port 51060 Nov 23 23:06:06.001679 sshd-session[5023]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:06.006930 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:51060.service: Deactivated successfully. Nov 23 23:06:06.009783 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:06:06.012771 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:06:06.014768 systemd-logind[1494]: Removed session 12. Nov 23 23:06:11.016486 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:48124.service - OpenSSH per-connection server daemon (10.0.0.1:48124). Nov 23 23:06:11.088417 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 48124 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:06:11.091095 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:11.100173 systemd-logind[1494]: New session 13 of user core. Nov 23 23:06:11.108748 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 23:06:11.268368 containerd[1523]: time="2025-11-23T23:06:11.267730423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:06:11.282600 sshd[5058]: Connection closed by 10.0.0.1 port 48124 Nov 23 23:06:11.283316 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:11.292552 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:48124.service: Deactivated successfully. Nov 23 23:06:11.294916 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:06:11.295977 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:06:11.299675 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:48138.service - OpenSSH per-connection server daemon (10.0.0.1:48138). Nov 23 23:06:11.300411 systemd-logind[1494]: Removed session 13. Nov 23 23:06:11.369088 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 48138 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:06:11.370724 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:11.375866 systemd-logind[1494]: New session 14 of user core. Nov 23 23:06:11.382055 systemd[1]: Started session-14.scope - Session 14 of User core. 
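[Editor's annotation] Note the retry cadence: the pulls fail in a burst around 23:06:00-:01 and resurface at 23:06:11-:15, consistent with kubelet's image-pull back-off, which per the Kubernetes documentation roughly doubles the delay after each failure up to a five-minute cap. A sketch of that schedule; the 10-second starting delay and doubling factor are assumptions about kubelet defaults, not values read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the delay before retry n of a failing image pull,
// assuming the documented shape: an initial delay that doubles each
// attempt and is capped at 5 minutes. The 10s start is an assumption.
func backoff(n int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < n; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		fmt.Printf("retry %d after %v\n", n, backoff(n))
	}
	// 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}
```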
Nov 23 23:06:11.479063 containerd[1523]: time="2025-11-23T23:06:11.478885960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:11.483929 containerd[1523]: time="2025-11-23T23:06:11.483819545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:06:11.483929 containerd[1523]: time="2025-11-23T23:06:11.483869351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:11.484637 kubelet[2682]: E1123 23:06:11.484580 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:11.485035 kubelet[2682]: E1123 23:06:11.484643 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:11.485432 kubelet[2682]: E1123 23:06:11.485382 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54kxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-865b864c6b-zwbsd_calico-apiserver(4987122a-6b7a-47b0-a501-e31f8cdc6bd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:11.486595 kubelet[2682]: E1123 23:06:11.486556 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" podUID="4987122a-6b7a-47b0-a501-e31f8cdc6bd8" Nov 23 23:06:11.609449 sshd[5075]: Connection closed by 10.0.0.1 port 48138 Nov 23 23:06:11.609847 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:11.622031 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:48138.service: Deactivated successfully. Nov 23 23:06:11.624152 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:06:11.624943 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit. Nov 23 23:06:11.628105 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:48154.service - OpenSSH per-connection server daemon (10.0.0.1:48154). Nov 23 23:06:11.629131 systemd-logind[1494]: Removed session 14. Nov 23 23:06:11.689140 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 48154 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:06:11.690823 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:11.695937 systemd-logind[1494]: New session 15 of user core. Nov 23 23:06:11.707738 systemd[1]: Started session-15.scope - Session 15 of User core. 
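[Editor's annotation] Every pull in this log fails the same way: a clean 404 from ghcr.io, meaning the tag v3.30.4 does not resolve under ghcr.io/flatcar/calico/* at all (the log does not say whether the images were never mirrored or live under another tag). One way to confirm a tag independently of containerd is a HEAD request against the OCI distribution API; the anonymous-token exchange below matches ghcr.io's usual behavior for public images but is an assumption, as is the single Accept media type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagExists asks the registry's v2 API whether repo:tag resolves.
// ghcr.io normally requires a bearer token even for public pulls,
// so an anonymous one is fetched first (assumed endpoint/flow).
func tagExists(repo, tag string) (bool, error) {
	var tok struct {
		Token string `json:"token"`
	}
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("flatcar/calico/goldmane", "v3.30.4")
	fmt.Println(ok, err) // false <nil> -- the 404 containerd reported
}
```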
Nov 23 23:06:12.260811 kubelet[2682]: E1123 23:06:12.260196 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d779b554-zljcz" podUID="31a178cc-f6e2-4156-8d63-c50d6b225cdb" Nov 23 23:06:12.262005 containerd[1523]: time="2025-11-23T23:06:12.261819349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:06:12.416130 sshd[5089]: Connection closed by 10.0.0.1 port 48154 Nov 23 23:06:12.417134 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:12.431718 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:48154.service: Deactivated successfully. Nov 23 23:06:12.437191 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:06:12.444600 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:06:12.450086 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:48160.service - OpenSSH per-connection server daemon (10.0.0.1:48160). Nov 23 23:06:12.453057 systemd-logind[1494]: Removed session 15. 
Nov 23 23:06:12.484257 containerd[1523]: time="2025-11-23T23:06:12.484211430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:12.485302 containerd[1523]: time="2025-11-23T23:06:12.485264265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:06:12.485450 containerd[1523]: time="2025-11-23T23:06:12.485346074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:12.485852 kubelet[2682]: E1123 23:06:12.485589 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:06:12.485852 kubelet[2682]: E1123 23:06:12.485648 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:06:12.485852 kubelet[2682]: E1123 23:06:12.485792 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9nr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mbtpf_calico-system(583c9149-22f9-45c3-9bb3-3e5f60548c49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:12.487325 kubelet[2682]: E1123 23:06:12.487274 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbtpf" podUID="583c9149-22f9-45c3-9bb3-3e5f60548c49" Nov 23 23:06:12.509440 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 48160 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:06:12.510969 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:12.515786 systemd-logind[1494]: New session 16 of user core. Nov 23 23:06:12.530753 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 23:06:12.840339 sshd[5113]: Connection closed by 10.0.0.1 port 48160 Nov 23 23:06:12.841759 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:12.853525 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:48160.service: Deactivated successfully. Nov 23 23:06:12.855714 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:06:12.856446 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:06:12.859336 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:48170.service - OpenSSH per-connection server daemon (10.0.0.1:48170). Nov 23 23:06:12.861246 systemd-logind[1494]: Removed session 16. Nov 23 23:06:12.921716 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 48170 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE Nov 23 23:06:12.925768 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:12.933008 systemd-logind[1494]: New session 17 of user core. Nov 23 23:06:12.945747 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 23 23:06:13.090282 sshd[5127]: Connection closed by 10.0.0.1 port 48170 Nov 23 23:06:13.090735 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:13.095435 systemd[1]: session-17.scope: Deactivated successfully. 
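[Editor's annotation] For readers post-processing a dump like this: the kubelet entries use klog's header format, a severity letter (I/W/E/F), MMDD date, wall-clock time, PID, and source file:line before the message, e.g. "E1123 23:06:12.487274 2682 pod_workers.go:1301". A small parser sketch for that prefix:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches klog's prefix: severity (I/W/E/F), MMDD,
// HH:MM:SS.micros, PID, file.go:line. The remainder of the line is
// the message body (here usually a structured key=value dump).
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := `E1123 23:06:12.487274    2682 pod_workers.go:1301] "Error syncing pod, skipping" pod="calico-system/goldmane-666569f655-mbtpf"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```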
Nov 23 23:06:13.097186 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit.
Nov 23 23:06:13.098003 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:48170.service: Deactivated successfully.
Nov 23 23:06:13.101968 systemd-logind[1494]: Removed session 17.
Nov 23 23:06:13.258947 containerd[1523]: time="2025-11-23T23:06:13.258883905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 23 23:06:13.463075 containerd[1523]: time="2025-11-23T23:06:13.463015086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:06:13.464086 containerd[1523]: time="2025-11-23T23:06:13.464021715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 23 23:06:13.464175 containerd[1523]: time="2025-11-23T23:06:13.464127366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 23 23:06:13.464520 kubelet[2682]: E1123 23:06:13.464351 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:06:13.464520 kubelet[2682]: E1123 23:06:13.464407 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:06:13.469489 kubelet[2682]: E1123 23:06:13.469419 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdnxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57f95fbbd5-bxqch_calico-apiserver(9258d3b7-8de8-4b94-bd45-9195727d4ddb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:06:13.471083 kubelet[2682]: E1123 23:06:13.471032 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" podUID="9258d3b7-8de8-4b94-bd45-9195727d4ddb"
Nov 23 23:06:14.262306 containerd[1523]: time="2025-11-23T23:06:14.261046225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 23 23:06:14.463445 containerd[1523]: time="2025-11-23T23:06:14.463377324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:06:14.464651 containerd[1523]: time="2025-11-23T23:06:14.464593295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 23 23:06:14.464722 containerd[1523]: time="2025-11-23T23:06:14.464651341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 23 23:06:14.464869 kubelet[2682]: E1123 23:06:14.464813 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:06:14.465198 kubelet[2682]: E1123 23:06:14.464869 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:06:14.465198 kubelet[2682]: E1123 23:06:14.465005 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkj5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-865b864c6b-p6bp2_calico-apiserver(bab07697-3b96-415b-b1d7-632329a49d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:06:14.466220 kubelet[2682]: E1123 23:06:14.466150 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" podUID="bab07697-3b96-415b-b1d7-632329a49d75"
Nov 23 23:06:15.260835 containerd[1523]: time="2025-11-23T23:06:15.259391890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 23 23:06:15.499903 containerd[1523]: time="2025-11-23T23:06:15.499852849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:06:15.501572 containerd[1523]: time="2025-11-23T23:06:15.501521186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 23 23:06:15.505555 containerd[1523]: time="2025-11-23T23:06:15.501924229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 23 23:06:15.505900 kubelet[2682]: E1123 23:06:15.502571 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 23 23:06:15.505900 kubelet[2682]: E1123 23:06:15.502629 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 23 23:06:15.505900 kubelet[2682]: E1123 23:06:15.503531 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5gts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f9f955d8c-k5vtv_calico-system(34a9e722-c2ac-40a8-8496-e24ef8260bba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:06:15.505900 kubelet[2682]: E1123 23:06:15.505707 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" podUID="34a9e722-c2ac-40a8-8496-e24ef8260bba"
Nov 23 23:06:16.259714 containerd[1523]: time="2025-11-23T23:06:16.259396751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 23 23:06:16.468192 containerd[1523]: time="2025-11-23T23:06:16.468138951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:06:16.470645 containerd[1523]: time="2025-11-23T23:06:16.470593170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 23 23:06:16.470713 containerd[1523]: time="2025-11-23T23:06:16.470682059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 23 23:06:16.470876 kubelet[2682]: E1123 23:06:16.470837 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 23:06:16.470953 kubelet[2682]: E1123 23:06:16.470888 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 23:06:16.471045 kubelet[2682]: E1123 23:06:16.471001 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bjmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pdb4s_calico-system(06591145-f7c8-4eb9-86a0-ddb163a9822f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:06:16.474109 containerd[1523]: time="2025-11-23T23:06:16.474075977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 23 23:06:16.689335 containerd[1523]: time="2025-11-23T23:06:16.689165686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:06:16.699518 containerd[1523]: time="2025-11-23T23:06:16.699418927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 23 23:06:16.699645 containerd[1523]: time="2025-11-23T23:06:16.699518457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 23 23:06:16.700847 kubelet[2682]: E1123 23:06:16.699975 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 23:06:16.700847 kubelet[2682]: E1123 23:06:16.700026 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 23:06:16.700847 kubelet[2682]: E1123 23:06:16.700133 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bjmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pdb4s_calico-system(06591145-f7c8-4eb9-86a0-ddb163a9822f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:06:16.701471 kubelet[2682]: E1123 23:06:16.701401 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f"
Nov 23 23:06:18.104718 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:48184.service - OpenSSH per-connection server daemon (10.0.0.1:48184).
Nov 23 23:06:18.177488 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 48184 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE
Nov 23 23:06:18.178992 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:06:18.184674 systemd-logind[1494]: New session 18 of user core.
Nov 23 23:06:18.194802 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 23 23:06:18.342847 sshd[5149]: Connection closed by 10.0.0.1 port 48184
Nov 23 23:06:18.343457 sshd-session[5146]: pam_unix(sshd:session): session closed for user core
Nov 23 23:06:18.349559 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:48184.service: Deactivated successfully.
Nov 23 23:06:18.357107 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 23:06:18.360816 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit.
Nov 23 23:06:18.362559 systemd-logind[1494]: Removed session 18.
Nov 23 23:06:19.429803 kernel: hrtimer: interrupt took 2200084 ns
Nov 23 23:06:19.534946 kubelet[2682]: E1123 23:06:19.534900 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 23 23:06:23.372434 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:44974.service - OpenSSH per-connection server daemon (10.0.0.1:44974).
Nov 23 23:06:23.482046 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 44974 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE
Nov 23 23:06:23.483252 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:06:23.488875 systemd-logind[1494]: New session 19 of user core.
Nov 23 23:06:23.497724 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 23 23:06:23.667316 sshd[5194]: Connection closed by 10.0.0.1 port 44974
Nov 23 23:06:23.667701 sshd-session[5191]: pam_unix(sshd:session): session closed for user core
Nov 23 23:06:23.671748 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:44974.service: Deactivated successfully.
Nov 23 23:06:23.674105 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 23:06:23.674944 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit.
Nov 23 23:06:23.676181 systemd-logind[1494]: Removed session 19.
Nov 23 23:06:25.258934 kubelet[2682]: E1123 23:06:25.258874 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-zwbsd" podUID="4987122a-6b7a-47b0-a501-e31f8cdc6bd8"
Nov 23 23:06:26.259691 kubelet[2682]: E1123 23:06:26.259551 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbtpf" podUID="583c9149-22f9-45c3-9bb3-3e5f60548c49"
Nov 23 23:06:26.259691 kubelet[2682]: E1123 23:06:26.259595 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-865b864c6b-p6bp2" podUID="bab07697-3b96-415b-b1d7-632329a49d75"
Nov 23 23:06:26.260714 kubelet[2682]: E1123 23:06:26.260670 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57f95fbbd5-bxqch" podUID="9258d3b7-8de8-4b94-bd45-9195727d4ddb"
Nov 23 23:06:27.258592 kubelet[2682]: E1123 23:06:27.258362 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 23 23:06:27.260024 containerd[1523]: time="2025-11-23T23:06:27.259943123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 23 23:06:27.261190 kubelet[2682]: E1123 23:06:27.260373 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pdb4s" podUID="06591145-f7c8-4eb9-86a0-ddb163a9822f"
Nov 23 23:06:27.477123 containerd[1523]: time="2025-11-23T23:06:27.477077349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:06:27.485884 containerd[1523]: time="2025-11-23T23:06:27.485825517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 23 23:06:27.485996 containerd[1523]: time="2025-11-23T23:06:27.485862794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 23 23:06:27.486065 kubelet[2682]: E1123 23:06:27.486026 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 23 23:06:27.486156 kubelet[2682]: E1123 23:06:27.486076 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 23 23:06:27.486215 kubelet[2682]: E1123 23:06:27.486170 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8a9e35afce540048647bad466d551b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztm4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d779b554-zljcz_calico-system(31a178cc-f6e2-4156-8d63-c50d6b225cdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:06:27.488663 containerd[1523]: time="2025-11-23T23:06:27.488599409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 23 23:06:27.672998 containerd[1523]: time="2025-11-23T23:06:27.672934534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:06:27.673883 containerd[1523]: time="2025-11-23T23:06:27.673847832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 23 23:06:27.673955 containerd[1523]: time="2025-11-23T23:06:27.673879710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 23 23:06:27.674129 kubelet[2682]: E1123 23:06:27.674086 2682 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 23 23:06:27.674181 kubelet[2682]: E1123 23:06:27.674143 2682 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 23 23:06:27.674296 kubelet[2682]: E1123 23:06:27.674261 2682 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztm4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d779b554-zljcz_calico-system(31a178cc-f6e2-4156-8d63-c50d6b225cdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:06:27.675776 kubelet[2682]: E1123 23:06:27.675732 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d779b554-zljcz" podUID="31a178cc-f6e2-4156-8d63-c50d6b225cdb"
Nov 23 23:06:28.690602 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:44978.service - OpenSSH per-connection server daemon (10.0.0.1:44978).
Nov 23 23:06:28.743173 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 44978 ssh2: RSA SHA256:8pY4dKG4ac3Eq3heM2LjeBYvWpJQfs2D9Pb2ZBisysE
Nov 23 23:06:28.744733 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:06:28.751654 systemd-logind[1494]: New session 20 of user core.
Nov 23 23:06:28.760707 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 23 23:06:28.894786 sshd[5212]: Connection closed by 10.0.0.1 port 44978
Nov 23 23:06:28.897788 sshd-session[5209]: pam_unix(sshd:session): session closed for user core
Nov 23 23:06:28.903911 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:44978.service: Deactivated successfully.
Nov 23 23:06:28.909257 systemd[1]: session-20.scope: Deactivated successfully.
Nov 23 23:06:28.910444 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit.
Nov 23 23:06:28.912085 systemd-logind[1494]: Removed session 20.
Nov 23 23:06:29.259279 kubelet[2682]: E1123 23:06:29.258863 2682 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f9f955d8c-k5vtv" podUID="34a9e722-c2ac-40a8-8496-e24ef8260bba"