Nov 7 23:48:51.426350 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 7 23:48:51.426377 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Nov 7 22:24:06 -00 2025 Nov 7 23:48:51.426386 kernel: KASLR enabled Nov 7 23:48:51.426392 kernel: efi: EFI v2.7 by EDK II Nov 7 23:48:51.426398 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Nov 7 23:48:51.426404 kernel: random: crng init done Nov 7 23:48:51.426411 kernel: secureboot: Secure boot disabled Nov 7 23:48:51.426417 kernel: ACPI: Early table checksum verification disabled Nov 7 23:48:51.426425 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Nov 7 23:48:51.426431 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 7 23:48:51.426438 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426445 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426451 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426457 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426466 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426473 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426480 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426487 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426494 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 7 23:48:51.426501 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 7 23:48:51.426508 kernel: ACPI: Use ACPI SPCR as default console: No Nov 7 23:48:51.426515 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 7 23:48:51.426523 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Nov 7 23:48:51.426539 kernel: Zone ranges: Nov 7 23:48:51.426547 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 7 23:48:51.426553 kernel: DMA32 empty Nov 7 23:48:51.426560 kernel: Normal empty Nov 7 23:48:51.426566 kernel: Device empty Nov 7 23:48:51.426573 kernel: Movable zone start for each node Nov 7 23:48:51.426579 kernel: Early memory node ranges Nov 7 23:48:51.426586 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Nov 7 23:48:51.426593 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Nov 7 23:48:51.426599 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Nov 7 23:48:51.426606 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Nov 7 23:48:51.426615 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Nov 7 23:48:51.426621 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Nov 7 23:48:51.426628 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Nov 7 23:48:51.426649 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Nov 7 23:48:51.426656 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Nov 7 23:48:51.426663 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 7 23:48:51.426676 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 7 23:48:51.426682 kernel: node 0: [mem 
0x00000000dcec0000-0x00000000dcfdffff] Nov 7 23:48:51.426690 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 7 23:48:51.426697 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 7 23:48:51.426704 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 7 23:48:51.426711 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Nov 7 23:48:51.426719 kernel: psci: probing for conduit method from ACPI. Nov 7 23:48:51.426730 kernel: psci: PSCIv1.1 detected in firmware. Nov 7 23:48:51.426740 kernel: psci: Using standard PSCI v0.2 function IDs Nov 7 23:48:51.426747 kernel: psci: Trusted OS migration not required Nov 7 23:48:51.426756 kernel: psci: SMC Calling Convention v1.1 Nov 7 23:48:51.426766 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 7 23:48:51.426773 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 7 23:48:51.426783 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 7 23:48:51.426792 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 7 23:48:51.426800 kernel: Detected PIPT I-cache on CPU0 Nov 7 23:48:51.426809 kernel: CPU features: detected: GIC system register CPU interface Nov 7 23:48:51.426817 kernel: CPU features: detected: Spectre-v4 Nov 7 23:48:51.426823 kernel: CPU features: detected: Spectre-BHB Nov 7 23:48:51.426833 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 7 23:48:51.426840 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 7 23:48:51.426848 kernel: CPU features: detected: ARM erratum 1418040 Nov 7 23:48:51.426855 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 7 23:48:51.426862 kernel: alternatives: applying boot alternatives Nov 7 23:48:51.426874 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8bfefa4d5bf8d825e537335d2d0fa0f6d70ecdd5bfc7a28e4bcd37bbf7abce90 Nov 7 23:48:51.426882 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 7 23:48:51.426889 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 7 23:48:51.426896 kernel: Fallback order for Node 0: 0 Nov 7 23:48:51.426903 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Nov 7 23:48:51.426911 kernel: Policy zone: DMA Nov 7 23:48:51.426918 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 7 23:48:51.426925 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Nov 7 23:48:51.426932 kernel: software IO TLB: area num 4. Nov 7 23:48:51.426939 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Nov 7 23:48:51.426947 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Nov 7 23:48:51.426954 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 7 23:48:51.426961 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 7 23:48:51.426969 kernel: rcu: RCU event tracing is enabled. Nov 7 23:48:51.426976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 7 23:48:51.426984 kernel: Trampoline variant of Tasks RCU enabled. Nov 7 23:48:51.426992 kernel: Tracing variant of Tasks RCU enabled. Nov 7 23:48:51.426999 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 7 23:48:51.427006 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 7 23:48:51.427013 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 7 23:48:51.427020 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 7 23:48:51.427028 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 7 23:48:51.427035 kernel: GICv3: 256 SPIs implemented Nov 7 23:48:51.427042 kernel: GICv3: 0 Extended SPIs implemented Nov 7 23:48:51.427049 kernel: Root IRQ handler: gic_handle_irq Nov 7 23:48:51.427056 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 7 23:48:51.427063 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 7 23:48:51.427071 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 7 23:48:51.427079 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 7 23:48:51.427086 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Nov 7 23:48:51.427093 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Nov 7 23:48:51.427100 kernel: GICv3: using LPI property table @0x0000000040130000 Nov 7 23:48:51.427108 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Nov 7 23:48:51.427115 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 7 23:48:51.427122 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 7 23:48:51.427129 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 7 23:48:51.427136 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 7 23:48:51.427143 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 7 23:48:51.427152 kernel: arm-pv: using stolen time PV Nov 7 23:48:51.427160 kernel: Console: colour dummy device 80x25 Nov 7 23:48:51.427168 kernel: ACPI: Core revision 20240827 Nov 7 23:48:51.427175 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 7 23:48:51.427183 kernel: pid_max: default: 32768 minimum: 301 Nov 7 23:48:51.427190 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 7 23:48:51.427198 kernel: landlock: Up and running. Nov 7 23:48:51.427205 kernel: SELinux: Initializing. Nov 7 23:48:51.427214 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 7 23:48:51.427222 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 7 23:48:51.427229 kernel: rcu: Hierarchical SRCU implementation. Nov 7 23:48:51.427237 kernel: rcu: Max phase no-delay instances is 400. Nov 7 23:48:51.427245 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 7 23:48:51.427252 kernel: Remapping and enabling EFI services. Nov 7 23:48:51.427259 kernel: smp: Bringing up secondary CPUs ... 
Nov 7 23:48:51.427269 kernel: Detected PIPT I-cache on CPU1 Nov 7 23:48:51.427281 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 7 23:48:51.427291 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Nov 7 23:48:51.427299 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 7 23:48:51.427307 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 7 23:48:51.427315 kernel: Detected PIPT I-cache on CPU2 Nov 7 23:48:51.427323 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 7 23:48:51.427333 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Nov 7 23:48:51.427341 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 7 23:48:51.427349 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 7 23:48:51.427357 kernel: Detected PIPT I-cache on CPU3 Nov 7 23:48:51.427365 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 7 23:48:51.427373 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Nov 7 23:48:51.427381 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 7 23:48:51.427389 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 7 23:48:51.427397 kernel: smp: Brought up 1 node, 4 CPUs Nov 7 23:48:51.427405 kernel: SMP: Total of 4 processors activated. Nov 7 23:48:51.427413 kernel: CPU: All CPU(s) started at EL1 Nov 7 23:48:51.427420 kernel: CPU features: detected: 32-bit EL0 Support Nov 7 23:48:51.427429 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 7 23:48:51.427436 kernel: CPU features: detected: Common not Private translations Nov 7 23:48:51.427446 kernel: CPU features: detected: CRC32 instructions Nov 7 23:48:51.427453 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 7 23:48:51.427461 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 7 23:48:51.427469 kernel: CPU features: detected: LSE atomic instructions Nov 7 23:48:51.427476 kernel: CPU features: detected: Privileged Access Never Nov 7 23:48:51.427484 kernel: CPU features: detected: RAS Extension Support Nov 7 23:48:51.427492 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 7 23:48:51.427501 kernel: alternatives: applying system-wide alternatives Nov 7 23:48:51.427511 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Nov 7 23:48:51.427519 kernel: Memory: 2450272K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 13120K init, 1038K bss, 99680K reserved, 16384K cma-reserved) Nov 7 23:48:51.427527 kernel: devtmpfs: initialized Nov 7 23:48:51.427543 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 7 23:48:51.427553 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 7 23:48:51.427561 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 7 23:48:51.427570 kernel: 0 pages in range for non-PLT usage Nov 7 23:48:51.427581 kernel: 515024 pages in range for PLT usage Nov 7 23:48:51.427589 kernel: pinctrl core: initialized pinctrl subsystem Nov 7 23:48:51.427596 kernel: SMBIOS 3.0.0 present. 
Nov 7 23:48:51.427604 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 7 23:48:51.427612 kernel: DMI: Memory slots populated: 1/1 Nov 7 23:48:51.427620 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 7 23:48:51.427628 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 7 23:48:51.427646 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 7 23:48:51.427654 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 7 23:48:51.427663 kernel: audit: initializing netlink subsys (disabled) Nov 7 23:48:51.427671 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Nov 7 23:48:51.427755 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 7 23:48:51.427766 kernel: cpuidle: using governor menu Nov 7 23:48:51.427774 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 7 23:48:51.427786 kernel: ASID allocator initialised with 32768 entries Nov 7 23:48:51.427795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 7 23:48:51.427803 kernel: Serial: AMBA PL011 UART driver Nov 7 23:48:51.427811 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 7 23:48:51.427819 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 7 23:48:51.427827 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 7 23:48:51.427835 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 7 23:48:51.427843 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 7 23:48:51.427859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 7 23:48:51.427868 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 7 23:48:51.427876 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 7 23:48:51.427884 kernel: ACPI: Added _OSI(Module Device) Nov 7 23:48:51.427892 kernel: ACPI: Added _OSI(Processor Device) Nov 7 23:48:51.427900 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 7 23:48:51.427909 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 7 23:48:51.427918 kernel: ACPI: Interpreter enabled Nov 7 23:48:51.427926 kernel: ACPI: Using GIC for interrupt routing Nov 7 23:48:51.427934 kernel: ACPI: MCFG table detected, 1 entries Nov 7 23:48:51.427942 kernel: ACPI: CPU0 has been hot-added Nov 7 23:48:51.427950 kernel: ACPI: CPU1 has been hot-added Nov 7 23:48:51.427957 kernel: ACPI: CPU2 has been hot-added Nov 7 23:48:51.427965 kernel: ACPI: CPU3 has been hot-added Nov 7 23:48:51.427973 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 7 23:48:51.427982 kernel: printk: legacy console [ttyAMA0] enabled Nov 7 23:48:51.427990 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 7 23:48:51.428187 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 7 23:48:51.428282 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 7 23:48:51.428372 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 7 23:48:51.428459 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 7 23:48:51.428574 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 7 23:48:51.428586 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 7 23:48:51.428594 kernel: PCI host bridge to bus 0000:00 Nov 7 
23:48:51.428747 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 7 23:48:51.428831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 7 23:48:51.428912 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 7 23:48:51.428985 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 7 23:48:51.429088 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 7 23:48:51.429182 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 7 23:48:51.429273 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Nov 7 23:48:51.429374 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Nov 7 23:48:51.429460 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 7 23:48:51.429550 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 7 23:48:51.429650 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Nov 7 23:48:51.429740 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Nov 7 23:48:51.429817 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 7 23:48:51.429891 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 7 23:48:51.429968 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 7 23:48:51.429978 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 7 23:48:51.429987 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 7 23:48:51.429995 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 7 23:48:51.430003 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 7 23:48:51.430011 kernel: iommu: Default domain type: Translated Nov 7 23:48:51.430021 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 7 23:48:51.430029 kernel: efivars: Registered efivars operations Nov 7 23:48:51.430037 kernel: vgaarb: loaded Nov 7 23:48:51.430045 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 7 23:48:51.430053 kernel: VFS: Disk quotas dquot_6.6.0 Nov 7 23:48:51.430061 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 7 23:48:51.430069 kernel: pnp: PnP ACPI init Nov 7 23:48:51.430172 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 7 23:48:51.430184 kernel: pnp: PnP ACPI: found 1 devices Nov 7 23:48:51.430192 kernel: NET: Registered PF_INET protocol family Nov 7 23:48:51.430200 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 7 23:48:51.430208 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 7 23:48:51.430217 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 7 23:48:51.430225 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 7 23:48:51.430236 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 7 23:48:51.430244 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 7 23:48:51.430252 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 7 23:48:51.430262 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 7 23:48:51.430270 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 7 23:48:51.430278 kernel: PCI: CLS 0 bytes, default 64 Nov 7 23:48:51.430287 kernel: kvm [1]: HYP mode not available Nov 7 23:48:51.430297 kernel: Initialise system 
trusted keyrings Nov 7 23:48:51.430305 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 7 23:48:51.430313 kernel: Key type asymmetric registered Nov 7 23:48:51.430322 kernel: Asymmetric key parser 'x509' registered Nov 7 23:48:51.430330 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 7 23:48:51.430338 kernel: io scheduler mq-deadline registered Nov 7 23:48:51.430346 kernel: io scheduler kyber registered Nov 7 23:48:51.430357 kernel: io scheduler bfq registered Nov 7 23:48:51.430370 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 7 23:48:51.430378 kernel: ACPI: button: Power Button [PWRB] Nov 7 23:48:51.430387 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 7 23:48:51.430469 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 7 23:48:51.430480 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 7 23:48:51.430488 kernel: thunder_xcv, ver 1.0 Nov 7 23:48:51.430498 kernel: thunder_bgx, ver 1.0 Nov 7 23:48:51.430506 kernel: nicpf, ver 1.0 Nov 7 23:48:51.430514 kernel: nicvf, ver 1.0 Nov 7 23:48:51.430620 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 7 23:48:51.430718 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-07T23:48:50 UTC (1762559330) Nov 7 23:48:51.430730 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 7 23:48:51.430738 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 7 23:48:51.430749 kernel: watchdog: NMI not fully supported Nov 7 23:48:51.430757 kernel: watchdog: Hard watchdog permanently disabled Nov 7 23:48:51.430765 kernel: NET: Registered PF_INET6 protocol family Nov 7 23:48:51.430773 kernel: Segment Routing with IPv6 Nov 7 23:48:51.430781 kernel: In-situ OAM (IOAM) with IPv6 Nov 7 23:48:51.430789 kernel: NET: Registered PF_PACKET protocol family Nov 7 23:48:51.430797 kernel: Key type dns_resolver registered Nov 7 23:48:51.430806 kernel: registered taskstats version 1 Nov 7 23:48:51.430814 kernel: Loading compiled-in X.509 certificates Nov 7 23:48:51.430822 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ebe7e9737da4c34f192c530d79f3cb246d03fd74' Nov 7 23:48:51.430830 kernel: Demotion targets for Node 0: null Nov 7 23:48:51.430838 kernel: Key type .fscrypt registered Nov 7 23:48:51.430845 kernel: Key type fscrypt-provisioning registered Nov 7 23:48:51.430853 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 7 23:48:51.430863 kernel: ima: Allocated hash algorithm: sha1 Nov 7 23:48:51.430871 kernel: ima: No architecture policies found Nov 7 23:48:51.430879 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 7 23:48:51.430887 kernel: clk: Disabling unused clocks Nov 7 23:48:51.430895 kernel: PM: genpd: Disabling unused power domains Nov 7 23:48:51.430903 kernel: Freeing unused kernel memory: 13120K Nov 7 23:48:51.430911 kernel: Run /init as init process Nov 7 23:48:51.430920 kernel: with arguments: Nov 7 23:48:51.430928 kernel: /init Nov 7 23:48:51.430936 kernel: with environment: Nov 7 23:48:51.430944 kernel: HOME=/ Nov 7 23:48:51.430952 kernel: TERM=linux Nov 7 23:48:51.431051 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 7 23:48:51.431133 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 7 23:48:51.431146 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Nov 7 23:48:51.431154 kernel: GPT:16515071 != 27000831 Nov 7 23:48:51.431161 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 7 23:48:51.431170 kernel: GPT:16515071 != 27000831 Nov 7 23:48:51.431184 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 7 23:48:51.431197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 7 23:48:51.431207 kernel: SCSI subsystem initialized Nov 7 23:48:51.431215 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 7 23:48:51.431223 kernel: device-mapper: uevent: version 1.0.3 Nov 7 23:48:51.431231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 7 23:48:51.431239 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 7 23:48:51.431247 kernel: raid6: neonx8 gen() 15744 MB/s Nov 7 23:48:51.431255 kernel: raid6: neonx4 gen() 15818 MB/s Nov 7 23:48:51.431264 kernel: raid6: neonx2 gen() 13227 MB/s Nov 7 23:48:51.431272 kernel: raid6: neonx1 gen() 10415 MB/s Nov 7 23:48:51.431279 kernel: raid6: int64x8 gen() 6899 MB/s Nov 7 23:48:51.431287 kernel: raid6: int64x4 gen() 7346 MB/s Nov 7 23:48:51.431295 kernel: raid6: int64x2 gen() 6102 MB/s Nov 7 23:48:51.431302 kernel: raid6: int64x1 gen() 5047 MB/s Nov 7 23:48:51.431310 kernel: raid6: using algorithm neonx4 gen() 15818 MB/s Nov 7 23:48:51.431318 kernel: raid6: .... xor() 12364 MB/s, rmw enabled Nov 7 23:48:51.431328 kernel: raid6: using neon recovery algorithm Nov 7 23:48:51.431335 kernel: xor: measuring software checksum speed Nov 7 23:48:51.431343 kernel: 8regs : 19294 MB/sec Nov 7 23:48:51.431351 kernel: 32regs : 21641 MB/sec Nov 7 23:48:51.431359 kernel: arm64_neon : 28032 MB/sec Nov 7 23:48:51.431367 kernel: xor: using function: arm64_neon (28032 MB/sec) Nov 7 23:48:51.431375 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 7 23:48:51.431385 kernel: BTRFS: device fsid 55631b0a-1ca9-4494-9c87-5a8b2623813a devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (205) Nov 7 23:48:51.431393 kernel: BTRFS info (device dm-0): first mount of filesystem 55631b0a-1ca9-4494-9c87-5a8b2623813a Nov 7 23:48:51.431402 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 7 23:48:51.431410 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 7 23:48:51.431419 kernel: BTRFS info (device dm-0): enabling free space tree Nov 7 23:48:51.431427 kernel: loop: module loaded Nov 7 23:48:51.431435 kernel: loop0: detected capacity change from 0 to 91464 Nov 7 23:48:51.431444 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 7 23:48:51.431453 systemd[1]: Successfully made /usr/ read-only. Nov 7 23:48:51.431464 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 7 23:48:51.431472 systemd[1]: Detected virtualization kvm. Nov 7 23:48:51.431481 systemd[1]: Detected architecture arm64. Nov 7 23:48:51.431488 systemd[1]: Running in initrd. Nov 7 23:48:51.431498 systemd[1]: No hostname configured, using default hostname. Nov 7 23:48:51.431507 systemd[1]: Hostname set to . Nov 7 23:48:51.431515 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Nov 7 23:48:51.431523 systemd[1]: Queued start job for default target initrd.target. Nov 7 23:48:51.431540 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 7 23:48:51.431549 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 7 23:48:51.431560 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 7 23:48:51.431569 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 7 23:48:51.431578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 7 23:48:51.431587 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 7 23:48:51.431596 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 7 23:48:51.431604 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 7 23:48:51.431614 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 7 23:48:51.431622 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 7 23:48:51.431639 systemd[1]: Reached target paths.target - Path Units. Nov 7 23:48:51.431648 systemd[1]: Reached target slices.target - Slice Units. Nov 7 23:48:51.431657 systemd[1]: Reached target swap.target - Swaps. Nov 7 23:48:51.431665 systemd[1]: Reached target timers.target - Timer Units. Nov 7 23:48:51.431676 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 7 23:48:51.431684 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 7 23:48:51.431693 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 7 23:48:51.431701 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 7 23:48:51.431717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 7 23:48:51.431728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 7 23:48:51.431737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 7 23:48:51.431746 systemd[1]: Reached target sockets.target - Socket Units. Nov 7 23:48:51.431755 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 7 23:48:51.431767 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 7 23:48:51.431777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 7 23:48:51.431785 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 7 23:48:51.431796 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 7 23:48:51.431805 systemd[1]: Starting systemd-fsck-usr.service... Nov 7 23:48:51.431814 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 7 23:48:51.431824 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 7 23:48:51.431833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 7 23:48:51.431844 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 7 23:48:51.431853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 7 23:48:51.431863 systemd[1]: Finished systemd-fsck-usr.service. Nov 7 23:48:51.431872 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 7 23:48:51.431881 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 7 23:48:51.431913 systemd-journald[346]: Collecting audit messages is disabled. Nov 7 23:48:51.431934 kernel: Bridge firewalling registered Nov 7 23:48:51.431943 systemd-journald[346]: Journal started Nov 7 23:48:51.431964 systemd-journald[346]: Runtime Journal (/run/log/journal/49cdadf162224cebb40a3afbdaffdde5) is 6M, max 48.5M, 42.4M free. Nov 7 23:48:51.427480 systemd-modules-load[348]: Inserted module 'br_netfilter' Nov 7 23:48:51.436239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 7 23:48:51.440038 systemd[1]: Started systemd-journald.service - Journal Service. Nov 7 23:48:51.440801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 7 23:48:51.445227 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 7 23:48:51.450923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 7 23:48:51.452978 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 7 23:48:51.455208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 7 23:48:51.477415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 7 23:48:51.490981 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 7 23:48:51.492090 systemd-tmpfiles[370]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 7 23:48:51.493377 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 7 23:48:51.497488 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 7 23:48:51.500444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 7 23:48:51.503766 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 7 23:48:51.506222 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 7 23:48:51.530939 dracut-cmdline[387]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8bfefa4d5bf8d825e537335d2d0fa0f6d70ecdd5bfc7a28e4bcd37bbf7abce90 Nov 7 23:48:51.555727 systemd-resolved[388]: Positive Trust Anchors: Nov 7 23:48:51.555748 systemd-resolved[388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 7 23:48:51.555751 systemd-resolved[388]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 7 23:48:51.555783 systemd-resolved[388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 7 23:48:51.582450 systemd-resolved[388]: Defaulting to hostname 'linux'. Nov 7 23:48:51.583600 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 7 23:48:51.584831 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 7 23:48:51.630683 kernel: Loading iSCSI transport class v2.0-870. Nov 7 23:48:51.638671 kernel: iscsi: registered transport (tcp) Nov 7 23:48:51.652669 kernel: iscsi: registered transport (qla4xxx) Nov 7 23:48:51.652717 kernel: QLogic iSCSI HBA Driver Nov 7 23:48:51.676323 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 7 23:48:51.697937 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 7 23:48:51.700380 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 7 23:48:51.750106 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 7 23:48:51.752819 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 7 23:48:51.754695 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 7 23:48:51.806298 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 7 23:48:51.809244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 7 23:48:51.839348 systemd-udevd[628]: Using default interface naming scheme 'v257'. Nov 7 23:48:51.847479 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 7 23:48:51.851862 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 7 23:48:51.881640 dracut-pre-trigger[690]: rd.md=0: removing MD RAID activation Nov 7 23:48:51.888365 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 7 23:48:51.891660 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 7 23:48:51.910694 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 7 23:48:51.913844 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 7 23:48:51.941911 systemd-networkd[745]: lo: Link UP Nov 7 23:48:51.941921 systemd-networkd[745]: lo: Gained carrier Nov 7 23:48:51.942715 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 7 23:48:51.943900 systemd[1]: Reached target network.target - Network. Nov 7 23:48:51.977192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 7 23:48:51.982014 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 7 23:48:52.031549 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 7 23:48:52.044285 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 7 23:48:52.052342 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 7 23:48:52.059604 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 7 23:48:52.061976 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 7 23:48:52.079579 systemd-networkd[745]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 7 23:48:52.079593 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 7 23:48:52.080650 systemd-networkd[745]: eth0: Link UP Nov 7 23:48:52.081152 systemd-networkd[745]: eth0: Gained carrier Nov 7 23:48:52.081166 systemd-networkd[745]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 7 23:48:52.085175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 7 23:48:52.085306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 7 23:48:52.092301 disk-uuid[801]: Primary Header is updated. Nov 7 23:48:52.092301 disk-uuid[801]: Secondary Entries is updated. Nov 7 23:48:52.092301 disk-uuid[801]: Secondary Header is updated. Nov 7 23:48:52.086939 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 7 23:48:52.094135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 7 23:48:52.109729 systemd-networkd[745]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 7 23:48:52.140153 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 7 23:48:52.144379 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 7 23:48:52.146995 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 7 23:48:52.148377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 7 23:48:52.153748 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 7 23:48:52.155685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 7 23:48:52.174726 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 7 23:48:53.125570 disk-uuid[802]: Warning: The kernel is still using the old partition table. Nov 7 23:48:53.125570 disk-uuid[802]: The new table will be used at the next reboot or after you Nov 7 23:48:53.125570 disk-uuid[802]: run partprobe(8) or kpartx(8) Nov 7 23:48:53.125570 disk-uuid[802]: The operation has completed successfully. Nov 7 23:48:53.134850 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 7 23:48:53.134960 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 7 23:48:53.137225 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 7 23:48:53.168242 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836) Nov 7 23:48:53.168288 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e Nov 7 23:48:53.168299 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 7 23:48:53.174086 kernel: BTRFS info (device vda6): turning on async discard Nov 7 23:48:53.174151 kernel: BTRFS info (device vda6): enabling free space tree Nov 7 23:48:53.180658 kernel: BTRFS info (device vda6): last unmount of filesystem c876c121-698c-4fc0-9477-04b409cf288e Nov 7 23:48:53.180818 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 7 23:48:53.182925 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 7 23:48:53.295666 ignition[855]: Ignition 2.22.0 Nov 7 23:48:53.296531 ignition[855]: Stage: fetch-offline Nov 7 23:48:53.296596 ignition[855]: no configs at "/usr/lib/ignition/base.d" Nov 7 23:48:53.296609 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 7 23:48:53.296717 ignition[855]: parsed url from cmdline: "" Nov 7 23:48:53.296721 ignition[855]: no config URL provided Nov 7 23:48:53.296726 ignition[855]: reading system config file "/usr/lib/ignition/user.ign" Nov 7 23:48:53.296736 ignition[855]: no config at "/usr/lib/ignition/user.ign" Nov 7 23:48:53.296778 ignition[855]: op(1): [started] loading QEMU firmware config module Nov 7 23:48:53.296782 ignition[855]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 7 23:48:53.304567 ignition[855]: op(1): [finished] loading QEMU firmware config module Nov 7 23:48:53.351058 ignition[855]: parsing config with SHA512: 4a1174baddfc943e0c7c0db71c4593e24f460c084e28abd415e413dec224b5009fae141f0dc4eb7045536a4faf41bb77dc0ae14ab83d7f26d0ea64b199fd6aed Nov 7 23:48:53.356283 unknown[855]: fetched base config from "system" Nov 7 23:48:53.356299 unknown[855]: fetched user config from "qemu" Nov 7 23:48:53.356859 ignition[855]: fetch-offline: fetch-offline passed Nov 7 23:48:53.356933 ignition[855]: Ignition finished successfully Nov 7 23:48:53.360283 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 7 23:48:53.362125 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 7 23:48:53.362991 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 7 23:48:53.409697 ignition[869]: Ignition 2.22.0 Nov 7 23:48:53.409718 ignition[869]: Stage: kargs Nov 7 23:48:53.409873 ignition[869]: no configs at "/usr/lib/ignition/base.d" Nov 7 23:48:53.409881 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 7 23:48:53.410662 ignition[869]: kargs: kargs passed Nov 7 23:48:53.413612 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 7 23:48:53.410710 ignition[869]: Ignition finished successfully Nov 7 23:48:53.415615 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 7 23:48:53.451475 ignition[877]: Ignition 2.22.0 Nov 7 23:48:53.451492 ignition[877]: Stage: disks Nov 7 23:48:53.451669 ignition[877]: no configs at "/usr/lib/ignition/base.d" Nov 7 23:48:53.451678 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 7 23:48:53.452583 ignition[877]: disks: disks passed Nov 7 23:48:53.455130 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Nov 7 23:48:53.452657 ignition[877]: Ignition finished successfully Nov 7 23:48:53.456473 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 7 23:48:53.458066 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 7 23:48:53.459780 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 7 23:48:53.461562 systemd[1]: Reached target sysinit.target - System Initialization. Nov 7 23:48:53.463570 systemd[1]: Reached target basic.target - Basic System. Nov 7 23:48:53.466235 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 7 23:48:53.510336 systemd-fsck[888]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 7 23:48:53.515338 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 7 23:48:53.517743 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 7 23:48:53.600678 kernel: EXT4-fs (vda9): mounted filesystem 12d1c98d-1cd5-4af6-bfe4-c8600a1c2a61 r/w with ordered data mode. Quota mode: none. Nov 7 23:48:53.601119 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 7 23:48:53.602449 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 7 23:48:53.605676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 7 23:48:53.607536 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 7 23:48:53.608606 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 7 23:48:53.608655 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 7 23:48:53.608685 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 7 23:48:53.622332 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 7 23:48:53.625105 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 7 23:48:53.627756 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (896) Nov 7 23:48:53.629655 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e Nov 7 23:48:53.629697 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 7 23:48:53.632785 kernel: BTRFS info (device vda6): turning on async discard Nov 7 23:48:53.632818 kernel: BTRFS info (device vda6): enabling free space tree Nov 7 23:48:53.633881 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 7 23:48:53.684176 initrd-setup-root[920]: cut: /sysroot/etc/passwd: No such file or directory Nov 7 23:48:53.688140 initrd-setup-root[927]: cut: /sysroot/etc/group: No such file or directory Nov 7 23:48:53.691806 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory Nov 7 23:48:53.695088 initrd-setup-root[941]: cut: /sysroot/etc/gshadow: No such file or directory Nov 7 23:48:53.794486 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 7 23:48:53.796741 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 7 23:48:53.798504 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 7 23:48:53.820978 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 7 23:48:53.822524 kernel: BTRFS info (device vda6): last unmount of filesystem c876c121-698c-4fc0-9477-04b409cf288e Nov 7 23:48:53.829749 systemd-networkd[745]: eth0: Gained IPv6LL Nov 7 23:48:53.835802 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 7 23:48:53.851848 ignition[1010]: INFO : Ignition 2.22.0 Nov 7 23:48:53.851848 ignition[1010]: INFO : Stage: mount Nov 7 23:48:53.854377 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 7 23:48:53.854377 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 7 23:48:53.854377 ignition[1010]: INFO : mount: mount passed Nov 7 23:48:53.854377 ignition[1010]: INFO : Ignition finished successfully Nov 7 23:48:53.855227 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 7 23:48:53.857422 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 7 23:48:54.602758 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 7 23:48:54.621649 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022) Nov 7 23:48:54.624664 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e Nov 7 23:48:54.624748 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 7 23:48:54.627143 kernel: BTRFS info (device vda6): turning on async discard Nov 7 23:48:54.627177 kernel: BTRFS info (device vda6): enabling free space tree Nov 7 23:48:54.628854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 7 23:48:54.664328 ignition[1040]: INFO : Ignition 2.22.0 Nov 7 23:48:54.664328 ignition[1040]: INFO : Stage: files Nov 7 23:48:54.666117 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 7 23:48:54.666117 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 7 23:48:54.668449 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Nov 7 23:48:54.669728 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 7 23:48:54.669728 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 7 23:48:54.672734 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 7 23:48:54.674353 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 7 23:48:54.674353 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 7 23:48:54.673331 unknown[1040]: wrote ssh authorized keys file for user: core Nov 7 23:48:54.678169 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 7 23:48:54.678169 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 7 23:48:54.719044 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 7 23:48:54.856661 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 7 23:48:54.856661 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 7 23:48:54.860566 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 7 23:48:54.860566 
ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 7 23:48:54.860566 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 7 23:48:54.860566 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 7 23:48:54.860566 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 7 23:48:54.860566 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 7 23:48:54.860566 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 7 23:48:54.873110 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 7 23:48:54.873110 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 7 23:48:54.873110 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 7 23:48:54.873110 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 7 23:48:54.873110 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 7 23:48:54.873110 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 7 23:48:55.168171 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 7 23:48:55.538197 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 7 23:48:55.538197 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 7 23:48:55.541949 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 7 23:48:55.543839 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 7 23:48:55.543839 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 7 23:48:55.543839 ignition[1040]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 7 23:48:55.543839 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 7 23:48:55.543839 ignition[1040]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 7 23:48:55.543839 ignition[1040]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 7 23:48:55.543839 ignition[1040]: INFO : files: op(f): [started] setting preset to disabled for 
"coreos-metadata.service" Nov 7 23:48:55.558830 ignition[1040]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 7 23:48:55.563675 ignition[1040]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 7 23:48:55.565195 ignition[1040]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 7 23:48:55.565195 ignition[1040]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 7 23:48:55.565195 ignition[1040]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 7 23:48:55.565195 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 7 23:48:55.565195 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 7 23:48:55.565195 ignition[1040]: INFO : files: files passed Nov 7 23:48:55.565195 ignition[1040]: INFO : Ignition finished successfully Nov 7 23:48:55.565955 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 7 23:48:55.569016 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 7 23:48:55.571050 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 7 23:48:55.597409 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 7 23:48:55.597534 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 7 23:48:55.601870 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Nov 7 23:48:55.604566 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 7 23:48:55.604566 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 7 23:48:55.608621 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 7 23:48:55.607768 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 7 23:48:55.610089 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 7 23:48:55.613780 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 7 23:48:55.658569 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 7 23:48:55.658712 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 7 23:48:55.660984 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 7 23:48:55.662692 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 7 23:48:55.664916 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 7 23:48:55.665866 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 7 23:48:55.692716 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 7 23:48:55.695504 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 7 23:48:55.729387 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 7 23:48:55.729727 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Nov 7 23:48:55.732021 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 7 23:48:55.734039 systemd[1]: Stopped target timers.target - Timer Units. Nov 7 23:48:55.736905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 7 23:48:55.737050 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 7 23:48:55.739623 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 7 23:48:55.741858 systemd[1]: Stopped target basic.target - Basic System. Nov 7 23:48:55.743523 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 7 23:48:55.745280 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 7 23:48:55.747225 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 7 23:48:55.749473 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 7 23:48:55.752284 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 7 23:48:55.754215 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 7 23:48:55.756944 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 7 23:48:55.758859 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 7 23:48:55.761862 systemd[1]: Stopped target swap.target - Swaps. Nov 7 23:48:55.763626 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 7 23:48:55.763792 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 7 23:48:55.766873 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 7 23:48:55.769128 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 7 23:48:55.771039 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 7 23:48:55.771731 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 7 23:48:55.773137 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 7 23:48:55.773274 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 7 23:48:55.775968 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 7 23:48:55.776103 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 7 23:48:55.777997 systemd[1]: Stopped target paths.target - Path Units. Nov 7 23:48:55.779546 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 7 23:48:55.784699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 7 23:48:55.786021 systemd[1]: Stopped target slices.target - Slice Units. Nov 7 23:48:55.788155 systemd[1]: Stopped target sockets.target - Socket Units. Nov 7 23:48:55.789770 systemd[1]: iscsid.socket: Deactivated successfully. Nov 7 23:48:55.789866 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 7 23:48:55.791456 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 7 23:48:55.791608 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 7 23:48:55.793171 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 7 23:48:55.793294 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 7 23:48:55.795068 systemd[1]: ignition-files.service: Deactivated successfully. Nov 7 23:48:55.795179 systemd[1]: Stopped ignition-files.service - Ignition (files). 
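The Ignition files stage logged above finishes its per-file ops (op(5) through op(12)) against /sysroot, records /sysroot/etc/.ignition-result.json, and its service is then stopped. A minimal sketch, assuming only that /sysroot becomes / after switch-root and that the result file is plain JSON (no particular schema is relied on), which re-checks those artifacts from the booted system:

#!/usr/bin/env python3
# Sketch: re-check the artifacts the Ignition files stage reported above.
# Assumptions (not in the log): /sysroot maps to / after switch-root and
# .ignition-result.json is plain JSON.
import json
from pathlib import Path

RESULT = Path("/etc/.ignition-result.json")   # written by op(12)
WRITTEN = [                                    # paths from op(5)-op(9)
    "/home/core/nginx.yaml",
    "/home/core/nfs-pod.yaml",
    "/home/core/nfs-pvc.yaml",
    "/etc/flatcar/update.conf",
    "/etc/extensions/kubernetes.raw",          # symlink into /opt/extensions
    "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw",
]

if RESULT.is_file():
    print(json.dumps(json.loads(RESULT.read_text()), indent=2))

for p in map(Path, WRITTEN):
    state = "present" if p.exists() or p.is_symlink() else "missing"
    print(f"{state:8s} {p}")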
Nov 7 23:48:55.797731 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 7 23:48:55.800450 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 7 23:48:55.801494 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 7 23:48:55.801654 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 7 23:48:55.803784 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 7 23:48:55.803900 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 7 23:48:55.805793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 7 23:48:55.805902 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 7 23:48:55.811681 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 7 23:48:55.817820 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 7 23:48:55.828851 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 7 23:48:55.834036 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 7 23:48:55.835690 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 7 23:48:55.838117 ignition[1096]: INFO : Ignition 2.22.0 Nov 7 23:48:55.838117 ignition[1096]: INFO : Stage: umount Nov 7 23:48:55.838117 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 7 23:48:55.838117 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 7 23:48:55.838117 ignition[1096]: INFO : umount: umount passed Nov 7 23:48:55.838117 ignition[1096]: INFO : Ignition finished successfully Nov 7 23:48:55.839343 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 7 23:48:55.839477 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 7 23:48:55.841160 systemd[1]: Stopped target network.target - Network. Nov 7 23:48:55.842563 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 7 23:48:55.842709 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 7 23:48:55.844648 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 7 23:48:55.844741 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 7 23:48:55.846334 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 7 23:48:55.846395 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 7 23:48:55.848312 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 7 23:48:55.848364 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 7 23:48:55.850134 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 7 23:48:55.850192 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 7 23:48:55.852153 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 7 23:48:55.853651 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 7 23:48:55.857311 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 7 23:48:55.857433 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 7 23:48:55.871857 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 7 23:48:55.872713 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 7 23:48:55.879206 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 7 23:48:55.880349 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Nov 7 23:48:55.880392 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 7 23:48:55.885170 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 7 23:48:55.886249 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 7 23:48:55.886338 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 7 23:48:55.888394 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 7 23:48:55.888448 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 7 23:48:55.890212 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 7 23:48:55.890266 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 7 23:48:55.892418 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 7 23:48:55.902175 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 7 23:48:55.902341 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 7 23:48:55.904921 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 7 23:48:55.904967 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 7 23:48:55.906808 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 7 23:48:55.906842 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 7 23:48:55.908748 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 7 23:48:55.908809 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 7 23:48:55.911691 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 7 23:48:55.911750 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 7 23:48:55.914415 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 7 23:48:55.914470 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 7 23:48:55.918223 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 7 23:48:55.919300 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 7 23:48:55.919376 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 7 23:48:55.921437 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 7 23:48:55.921492 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 7 23:48:55.923704 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 7 23:48:55.923753 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 7 23:48:55.925828 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 7 23:48:55.925876 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 7 23:48:55.927894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 7 23:48:55.927950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 7 23:48:55.937945 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 7 23:48:55.938072 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 7 23:48:55.940417 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 7 23:48:55.940535 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Nov 7 23:48:55.942699 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 7 23:48:55.944964 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 7 23:48:55.961268 systemd[1]: Switching root. Nov 7 23:48:55.987983 systemd-journald[346]: Journal stopped Nov 7 23:48:56.879568 systemd-journald[346]: Received SIGTERM from PID 1 (systemd). Nov 7 23:48:56.879627 kernel: SELinux: policy capability network_peer_controls=1 Nov 7 23:48:56.879661 kernel: SELinux: policy capability open_perms=1 Nov 7 23:48:56.879672 kernel: SELinux: policy capability extended_socket_class=1 Nov 7 23:48:56.879684 kernel: SELinux: policy capability always_check_network=0 Nov 7 23:48:56.879697 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 7 23:48:56.879707 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 7 23:48:56.879718 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 7 23:48:56.879727 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 7 23:48:56.879737 kernel: SELinux: policy capability userspace_initial_context=0 Nov 7 23:48:56.879747 kernel: audit: type=1403 audit(1762559336.209:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 7 23:48:56.879759 systemd[1]: Successfully loaded SELinux policy in 65.155ms. Nov 7 23:48:56.879776 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.031ms. Nov 7 23:48:56.879787 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 7 23:48:56.879799 systemd[1]: Detected virtualization kvm. Nov 7 23:48:56.879810 systemd[1]: Detected architecture arm64. Nov 7 23:48:56.879821 systemd[1]: Detected first boot. Nov 7 23:48:56.879832 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 7 23:48:56.879851 zram_generator::config[1146]: No configuration found. Nov 7 23:48:56.879864 kernel: NET: Registered PF_VSOCK protocol family Nov 7 23:48:56.879874 systemd[1]: Populated /etc with preset unit settings. Nov 7 23:48:56.879888 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 7 23:48:56.879899 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 7 23:48:56.879910 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 7 23:48:56.879923 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 7 23:48:56.879934 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 7 23:48:56.879945 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 7 23:48:56.879955 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 7 23:48:56.879966 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 7 23:48:56.879978 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 7 23:48:56.879989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 7 23:48:56.880002 systemd[1]: Created slice user.slice - User and Session Slice. Nov 7 23:48:56.880013 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
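The handover above ends with the kernel loading the SELinux policy (65.155ms) and listing its policy capabilities. As an illustration of where that state is visible at runtime, a sketch that reads the current mode from selinuxfs; the /sys/fs/selinux path is the standard mount point and is assumed, not printed in this log:

#!/usr/bin/env python3
# Sketch: read the SELinux state whose policy load is reported above.
from pathlib import Path

enforce = Path("/sys/fs/selinux/enforce")
if enforce.is_file():
    mode = enforce.read_text().strip()
    print("SELinux mode:", "enforcing" if mode == "1" else "permissive")
else:
    print("selinuxfs not mounted at /sys/fs/selinux")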
Nov 7 23:48:56.880023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 7 23:48:56.880035 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 7 23:48:56.880046 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 7 23:48:56.880060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 7 23:48:56.880072 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 7 23:48:56.880085 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 7 23:48:56.880097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 7 23:48:56.880109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 7 23:48:56.880119 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 7 23:48:56.880130 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 7 23:48:56.880141 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 7 23:48:56.880154 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 7 23:48:56.880164 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 7 23:48:56.880176 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 7 23:48:56.880187 systemd[1]: Reached target slices.target - Slice Units. Nov 7 23:48:56.880197 systemd[1]: Reached target swap.target - Swaps. Nov 7 23:48:56.880208 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 7 23:48:56.880219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 7 23:48:56.880231 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 7 23:48:56.880243 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 7 23:48:56.880255 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 7 23:48:56.880265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 7 23:48:56.880276 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 7 23:48:56.880288 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 7 23:48:56.880299 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 7 23:48:56.880310 systemd[1]: Mounting media.mount - External Media Directory... Nov 7 23:48:56.880322 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 7 23:48:56.880333 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 7 23:48:56.880345 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 7 23:48:56.880357 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 7 23:48:56.880368 systemd[1]: Reached target machines.target - Containers. Nov 7 23:48:56.880379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 7 23:48:56.880391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 7 23:48:56.880404 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 7 23:48:56.880415 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 7 23:48:56.880426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 7 23:48:56.880438 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 7 23:48:56.880448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 7 23:48:56.880459 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 7 23:48:56.880472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 7 23:48:56.880484 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 7 23:48:56.880496 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 7 23:48:56.880515 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 7 23:48:56.880528 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 7 23:48:56.880539 systemd[1]: Stopped systemd-fsck-usr.service. Nov 7 23:48:56.880550 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 7 23:48:56.880563 kernel: fuse: init (API version 7.41) Nov 7 23:48:56.880573 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 7 23:48:56.880584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 7 23:48:56.880595 kernel: ACPI: bus type drm_connector registered Nov 7 23:48:56.880610 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 7 23:48:56.880622 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 7 23:48:56.880641 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 7 23:48:56.880658 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 7 23:48:56.880671 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 7 23:48:56.880682 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 7 23:48:56.880692 systemd[1]: Mounted media.mount - External Media Directory. Nov 7 23:48:56.880707 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 7 23:48:56.880718 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 7 23:48:56.880729 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 7 23:48:56.880761 systemd-journald[1214]: Collecting audit messages is disabled. Nov 7 23:48:56.880786 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 7 23:48:56.880799 systemd-journald[1214]: Journal started Nov 7 23:48:56.880822 systemd-journald[1214]: Runtime Journal (/run/log/journal/49cdadf162224cebb40a3afbdaffdde5) is 6M, max 48.5M, 42.4M free. Nov 7 23:48:56.631334 systemd[1]: Queued start job for default target multi-user.target. Nov 7 23:48:56.651827 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 7 23:48:56.652319 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 7 23:48:56.882973 systemd[1]: Started systemd-journald.service - Journal Service. Nov 7 23:48:56.885688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
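systemd-journald above starts with a 6M runtime journal under /run/log/journal/<machine-id>, and later entries show it flushing to the persistent journal under /var/log/journal. A small sketch, using the real journalctl --disk-usage option, that reports where the journals live and how much space they take:

#!/usr/bin/env python3
# Sketch: list journal locations and let journalctl sum their size.
import subprocess
from pathlib import Path

for root in ("/run/log/journal", "/var/log/journal"):
    p = Path(root)
    if p.is_dir():
        print(root, "->", [d.name for d in p.iterdir() if d.is_dir()])

subprocess.run(["journalctl", "--disk-usage"], check=False)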
Nov 7 23:48:56.887323 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 7 23:48:56.887533 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 7 23:48:56.891079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 7 23:48:56.891268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 7 23:48:56.892808 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 7 23:48:56.892994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 7 23:48:56.894556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 7 23:48:56.895732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 7 23:48:56.897316 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 7 23:48:56.897517 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 7 23:48:56.899013 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 7 23:48:56.899204 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 7 23:48:56.900910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 7 23:48:56.902894 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 7 23:48:56.905474 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 7 23:48:56.907459 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 7 23:48:56.917883 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 7 23:48:56.924055 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 7 23:48:56.926076 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 7 23:48:56.929736 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 7 23:48:56.932256 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 7 23:48:56.933604 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 7 23:48:56.933680 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 7 23:48:56.935867 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 7 23:48:56.937353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 7 23:48:56.939987 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 7 23:48:56.942314 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 7 23:48:56.943698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 7 23:48:56.945294 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 7 23:48:56.946735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 7 23:48:56.948854 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 7 23:48:56.951310 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Nov 7 23:48:56.952126 systemd-journald[1214]: Time spent on flushing to /var/log/journal/49cdadf162224cebb40a3afbdaffdde5 is 11.751ms for 872 entries. Nov 7 23:48:56.952126 systemd-journald[1214]: System Journal (/var/log/journal/49cdadf162224cebb40a3afbdaffdde5) is 8M, max 163.5M, 155.5M free. Nov 7 23:48:57.108873 systemd-journald[1214]: Received client request to flush runtime journal. Nov 7 23:48:57.108937 kernel: loop1: detected capacity change from 0 to 119832 Nov 7 23:48:57.108956 kernel: loop2: detected capacity change from 0 to 100624 Nov 7 23:48:57.108982 kernel: loop3: detected capacity change from 0 to 200800 Nov 7 23:48:57.109001 kernel: loop4: detected capacity change from 0 to 119832 Nov 7 23:48:56.955455 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 7 23:48:56.958807 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 7 23:48:56.960218 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 7 23:48:56.983626 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Nov 7 23:48:56.983657 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Nov 7 23:48:56.987755 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 7 23:48:56.989937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 7 23:48:56.993009 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 7 23:48:57.079044 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 7 23:48:57.080604 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 7 23:48:57.083194 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 7 23:48:57.111794 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 7 23:48:57.115586 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 7 23:48:57.120697 kernel: loop5: detected capacity change from 0 to 100624 Nov 7 23:48:57.122430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 7 23:48:57.126827 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 7 23:48:57.152539 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Nov 7 23:48:57.152941 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Nov 7 23:48:57.156830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 7 23:48:57.161580 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 7 23:48:57.164651 kernel: loop6: detected capacity change from 0 to 200800 Nov 7 23:48:57.166345 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 7 23:48:57.170716 (sd-merge)[1279]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 7 23:48:57.173840 (sd-merge)[1279]: Merged extensions into '/usr'. Nov 7 23:48:57.177561 systemd[1]: Reload requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)... Nov 7 23:48:57.177578 systemd[1]: Reloading... Nov 7 23:48:57.243660 zram_generator::config[1325]: No configuration found. Nov 7 23:48:57.274197 systemd-resolved[1284]: Positive Trust Anchors: Nov 7 23:48:57.274559 systemd-resolved[1284]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 7 23:48:57.274617 systemd-resolved[1284]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 7 23:48:57.274706 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 7 23:48:57.285743 systemd-resolved[1284]: Defaulting to hostname 'linux'. Nov 7 23:48:57.385421 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 7 23:48:57.385564 systemd[1]: Reloading finished in 207 ms. Nov 7 23:48:57.419585 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 7 23:48:57.420969 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 7 23:48:57.422380 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 7 23:48:57.425565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 7 23:48:57.445908 systemd[1]: Starting ensure-sysext.service... Nov 7 23:48:57.447872 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 7 23:48:57.459037 systemd[1]: Reload requested from client PID 1355 ('systemctl') (unit ensure-sysext.service)... Nov 7 23:48:57.459058 systemd[1]: Reloading... Nov 7 23:48:57.473574 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 7 23:48:57.473605 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 7 23:48:57.473843 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 7 23:48:57.474036 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 7 23:48:57.474694 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 7 23:48:57.474883 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Nov 7 23:48:57.474933 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Nov 7 23:48:57.478773 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Nov 7 23:48:57.478787 systemd-tmpfiles[1356]: Skipping /boot Nov 7 23:48:57.485118 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Nov 7 23:48:57.485135 systemd-tmpfiles[1356]: Skipping /boot Nov 7 23:48:57.520834 zram_generator::config[1389]: No configuration found. Nov 7 23:48:57.650536 systemd[1]: Reloading finished in 191 ms. Nov 7 23:48:57.671463 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 7 23:48:57.688821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 7 23:48:57.698094 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 7 23:48:57.700607 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
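The reload above comes from systemd-sysext: the (sd-merge) lines report the images 'containerd-flatcar.raw', 'docker-flatcar.raw' and 'kubernetes.raw' being merged into /usr, the last of them the extension Ignition linked under /etc/extensions earlier. A sketch that lists images in the documented sysext search directories and asks systemd-sysext itself for the merge status; the directory list comes from the systemd-sysext documentation, not from this log:

#!/usr/bin/env python3
# Sketch: show which system-extension images sd-merge could have picked up.
# /etc/extensions, /run/extensions and /var/lib/extensions are the
# documented default search directories (an assumption, not logged here).
import subprocess
from pathlib import Path

for d in map(Path, ("/etc/extensions", "/run/extensions", "/var/lib/extensions")):
    if d.is_dir():
        for img in sorted(d.iterdir()):
            target = f" -> {img.readlink()}" if img.is_symlink() else ""
            print(f"{img}{target}")

# The canonical view of what is currently merged into /usr and /opt:
subprocess.run(["systemd-sysext", "status"], check=False)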
Nov 7 23:48:57.721715 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 7 23:48:57.724401 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 7 23:48:57.729898 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 7 23:48:57.733927 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 7 23:48:57.739190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 7 23:48:57.742520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 7 23:48:57.747204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 7 23:48:57.749945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 7 23:48:57.751800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 7 23:48:57.752013 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 7 23:48:57.755711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 7 23:48:57.755891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 7 23:48:57.756096 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 7 23:48:57.758973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 7 23:48:57.762393 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 7 23:48:57.763494 systemd-udevd[1430]: Using default interface naming scheme 'v257'. Nov 7 23:48:57.763928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 7 23:48:57.764057 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 7 23:48:57.768144 systemd[1]: Finished ensure-sysext.service. Nov 7 23:48:57.781829 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 7 23:48:57.784739 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 7 23:48:57.788444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 7 23:48:57.788743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 7 23:48:57.791609 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 7 23:48:57.795089 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 7 23:48:57.795309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 7 23:48:57.797239 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 7 23:48:57.798107 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Nov 7 23:48:57.820741 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 7 23:48:57.823261 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 7 23:48:57.823302 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 7 23:48:57.832032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 7 23:48:57.832304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 7 23:48:57.837521 augenrules[1480]: No rules Nov 7 23:48:57.840946 systemd[1]: audit-rules.service: Deactivated successfully. Nov 7 23:48:57.841959 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 7 23:48:57.846749 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 7 23:48:57.850672 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 7 23:48:57.855981 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 7 23:48:57.856105 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 7 23:48:57.885678 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 7 23:48:57.888248 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 7 23:48:57.914712 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 7 23:48:57.929817 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 7 23:48:57.931336 systemd[1]: Reached target time-set.target - System Time Set. Nov 7 23:48:57.942201 systemd-networkd[1471]: lo: Link UP Nov 7 23:48:57.942508 systemd-networkd[1471]: lo: Gained carrier Nov 7 23:48:57.943429 systemd-networkd[1471]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 7 23:48:57.943432 systemd-networkd[1471]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 7 23:48:57.943443 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 7 23:48:57.945015 systemd[1]: Reached target network.target - Network. Nov 7 23:48:57.945749 systemd-networkd[1471]: eth0: Link UP Nov 7 23:48:57.946315 systemd-networkd[1471]: eth0: Gained carrier Nov 7 23:48:57.946411 systemd-networkd[1471]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 7 23:48:57.948115 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 7 23:48:57.951842 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 7 23:48:57.959741 systemd-networkd[1471]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 7 23:48:57.961369 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. Nov 7 23:48:57.962167 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 7 23:48:57.962230 systemd-timesyncd[1446]: Initial clock synchronization to Fri 2025-11-07 23:48:57.848118 UTC. 
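systemd-networkd above configures eth0 from /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.25/16 over DHCPv4, after which systemd-timesyncd synchronizes against 10.0.0.1:123. A sketch that reads the same state back through the standard networkctl and timedatectl CLIs (plain status calls, no output parsing):

#!/usr/bin/env python3
# Sketch: query the link and time-sync state reported in the log above.
import subprocess

for cmd in (
    ["networkctl", "status", "eth0"],       # address, gateway, .network file in use
    ["timedatectl", "timesync-status"],     # NTP server and last sync
):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=False)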
Nov 7 23:48:57.979675 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 7 23:48:58.041716 ldconfig[1424]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 7 23:48:58.052037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 7 23:48:58.085737 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 7 23:48:58.088605 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 7 23:48:58.108794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 7 23:48:58.114697 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 7 23:48:58.116089 systemd[1]: Reached target sysinit.target - System Initialization. Nov 7 23:48:58.117503 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 7 23:48:58.118853 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 7 23:48:58.120259 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 7 23:48:58.121442 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 7 23:48:58.122915 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 7 23:48:58.124141 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 7 23:48:58.124176 systemd[1]: Reached target paths.target - Path Units. Nov 7 23:48:58.125072 systemd[1]: Reached target timers.target - Timer Units. Nov 7 23:48:58.127018 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 7 23:48:58.129545 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 7 23:48:58.132520 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 7 23:48:58.134097 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 7 23:48:58.135389 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 7 23:48:58.141401 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 7 23:48:58.142908 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 7 23:48:58.144765 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 7 23:48:58.145897 systemd[1]: Reached target sockets.target - Socket Units. Nov 7 23:48:58.146822 systemd[1]: Reached target basic.target - Basic System. Nov 7 23:48:58.147745 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 7 23:48:58.147776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 7 23:48:58.148869 systemd[1]: Starting containerd.service - containerd container runtime... Nov 7 23:48:58.150959 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 7 23:48:58.152948 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 7 23:48:58.155101 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 7 23:48:58.157218 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 7 23:48:58.158333 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 7 23:48:58.159433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 7 23:48:58.161401 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 7 23:48:58.164032 jq[1533]: false Nov 7 23:48:58.164818 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 7 23:48:58.167053 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 7 23:48:58.170867 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 7 23:48:58.171916 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 7 23:48:58.172406 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 7 23:48:58.173022 systemd[1]: Starting update-engine.service - Update Engine... Nov 7 23:48:58.174863 extend-filesystems[1534]: Found /dev/vda6 Nov 7 23:48:58.176019 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 7 23:48:58.178972 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 7 23:48:58.182297 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 7 23:48:58.182494 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 7 23:48:58.183389 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 7 23:48:58.184423 extend-filesystems[1534]: Found /dev/vda9 Nov 7 23:48:58.184070 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 7 23:48:58.191741 jq[1546]: true Nov 7 23:48:58.194343 extend-filesystems[1534]: Checking size of /dev/vda9 Nov 7 23:48:58.192963 systemd[1]: motdgen.service: Deactivated successfully. Nov 7 23:48:58.193228 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 7 23:48:58.205306 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 7 23:48:58.209433 jq[1562]: true Nov 7 23:48:58.210653 extend-filesystems[1534]: Resized partition /dev/vda9 Nov 7 23:48:58.211905 update_engine[1544]: I20251107 23:48:58.211356 1544 main.cc:92] Flatcar Update Engine starting Nov 7 23:48:58.212377 tar[1551]: linux-arm64/LICENSE Nov 7 23:48:58.212849 tar[1551]: linux-arm64/helm Nov 7 23:48:58.214441 extend-filesystems[1578]: resize2fs 1.47.3 (8-Jul-2025) Nov 7 23:48:58.228824 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 7 23:48:58.236727 dbus-daemon[1531]: [system] SELinux support is enabled Nov 7 23:48:58.237293 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 7 23:48:58.240587 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 7 23:48:58.240615 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
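Flatcar's update_engine starts above, and a little further down it schedules its first check while locksmithd comes up with the 'reboot' strategy; both consult /etc/flatcar/update.conf, the file Ignition wrote at op(8). A sketch that simply dumps that file; the usual key names (GROUP=, SERVER=, REBOOT_STRATEGY=) are an assumption, since the log never prints the contents:

#!/usr/bin/env python3
# Sketch: show the update/locksmith settings the services above consume.
# The path comes from Ignition op(8) in this log; the key names mentioned
# in the lead-in are customary Flatcar ones, not taken from this log.
from pathlib import Path

conf = Path("/etc/flatcar/update.conf")
if conf.is_file():
    for line in conf.read_text().splitlines():
        if line.strip() and not line.startswith("#"):
            print(line)
else:
    print(conf, "not present")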
Nov 7 23:48:58.243078 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 7 23:48:58.243103 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 7 23:48:58.244777 update_engine[1544]: I20251107 23:48:58.244380 1544 update_check_scheduler.cc:74] Next update check in 7m53s Nov 7 23:48:58.245882 systemd[1]: Started update-engine.service - Update Engine. Nov 7 23:48:58.254980 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 7 23:48:58.261008 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 7 23:48:58.280854 extend-filesystems[1578]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 7 23:48:58.280854 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 7 23:48:58.280854 extend-filesystems[1578]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 7 23:48:58.286811 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Nov 7 23:48:58.285076 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 7 23:48:58.285296 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 7 23:48:58.291270 bash[1596]: Updated "/home/core/.ssh/authorized_keys" Nov 7 23:48:58.294864 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 7 23:48:58.303053 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 7 23:48:58.311409 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (Power Button) Nov 7 23:48:58.312825 systemd-logind[1543]: New seat seat0. Nov 7 23:48:58.313912 systemd[1]: Started systemd-logind.service - User Login Management. 
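extend-filesystems above resizes the root filesystem on /dev/vda9 online, from 456704 to 1784827 4k blocks according to both the kernel and resize2fs. The block counts translate to sizes as follows (pure arithmetic on the numbers in the log):

#!/usr/bin/env python3
# Sketch: size arithmetic for the /dev/vda9 online resize reported above.
BLOCK = 4096                         # "(4k) blocks" per the resize2fs output

for label, blocks in (("before", 456_704), ("after", 1_784_827)):
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks x {BLOCK} B = {size} bytes "
          f"~ {size / 2**30:.2f} GiB")
# before: ~1.74 GiB, after: ~6.81 GiB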
Nov 7 23:48:58.333087 locksmithd[1584]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 7 23:48:58.393577 containerd[1559]: time="2025-11-07T23:48:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 7 23:48:58.397640 containerd[1559]: time="2025-11-07T23:48:58.394247829Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 7 23:48:58.404304 containerd[1559]: time="2025-11-07T23:48:58.404247524Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.723µs" Nov 7 23:48:58.404304 containerd[1559]: time="2025-11-07T23:48:58.404291965Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 7 23:48:58.404397 containerd[1559]: time="2025-11-07T23:48:58.404313133Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 7 23:48:58.404498 containerd[1559]: time="2025-11-07T23:48:58.404467267Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 7 23:48:58.404498 containerd[1559]: time="2025-11-07T23:48:58.404489786Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 7 23:48:58.404546 containerd[1559]: time="2025-11-07T23:48:58.404513932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 7 23:48:58.404579 containerd[1559]: time="2025-11-07T23:48:58.404563378Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 7 23:48:58.404606 containerd[1559]: time="2025-11-07T23:48:58.404577556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 7 23:48:58.404842 containerd[1559]: time="2025-11-07T23:48:58.404809532Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 7 23:48:58.404842 containerd[1559]: time="2025-11-07T23:48:58.404832010Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 7 23:48:58.404882 containerd[1559]: time="2025-11-07T23:48:58.404844719Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 7 23:48:58.404882 containerd[1559]: time="2025-11-07T23:48:58.404852980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 7 23:48:58.404940 containerd[1559]: time="2025-11-07T23:48:58.404925857Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 7 23:48:58.405155 containerd[1559]: time="2025-11-07T23:48:58.405123638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 7 23:48:58.405178 containerd[1559]: time="2025-11-07T23:48:58.405162837Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 7 23:48:58.405178 containerd[1559]: time="2025-11-07T23:48:58.405174116Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 7 23:48:58.405224 containerd[1559]: time="2025-11-07T23:48:58.405207000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 7 23:48:58.405455 containerd[1559]: time="2025-11-07T23:48:58.405439055Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 7 23:48:58.405519 containerd[1559]: time="2025-11-07T23:48:58.405503195Z" level=info msg="metadata content store policy set" policy=shared Nov 7 23:48:58.435902 containerd[1559]: time="2025-11-07T23:48:58.435832787Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 7 23:48:58.436335 containerd[1559]: time="2025-11-07T23:48:58.436292648Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 7 23:48:58.436432 containerd[1559]: time="2025-11-07T23:48:58.436409331Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 7 23:48:58.436555 containerd[1559]: time="2025-11-07T23:48:58.436514615Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 7 23:48:58.436585 containerd[1559]: time="2025-11-07T23:48:58.436563862Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 7 23:48:58.436616 containerd[1559]: time="2025-11-07T23:48:58.436591464Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 7 23:48:58.437709 containerd[1559]: time="2025-11-07T23:48:58.437661745Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 7 23:48:58.438162 containerd[1559]: time="2025-11-07T23:48:58.437953333Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 7 23:48:58.438162 containerd[1559]: time="2025-11-07T23:48:58.437994557Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 7 23:48:58.438162 containerd[1559]: time="2025-11-07T23:48:58.438028235Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 7 23:48:58.438162 containerd[1559]: time="2025-11-07T23:48:58.438040388Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 7 23:48:58.438162 containerd[1559]: time="2025-11-07T23:48:58.438056512Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 7 23:48:58.438476 containerd[1559]: time="2025-11-07T23:48:58.438432733Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 7 23:48:58.438512 containerd[1559]: time="2025-11-07T23:48:58.438476301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 7 23:48:58.438531 containerd[1559]: time="2025-11-07T23:48:58.438513990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 7 23:48:58.438549 
containerd[1559]: time="2025-11-07T23:48:58.438529916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 7 23:48:58.438549 containerd[1559]: time="2025-11-07T23:48:58.438542466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 7 23:48:58.438580 containerd[1559]: time="2025-11-07T23:48:58.438552712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 7 23:48:58.438637 containerd[1559]: time="2025-11-07T23:48:58.438615859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 7 23:48:58.438673 containerd[1559]: time="2025-11-07T23:48:58.438658672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 7 23:48:58.438693 containerd[1559]: time="2025-11-07T23:48:58.438674518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 7 23:48:58.438693 containerd[1559]: time="2025-11-07T23:48:58.438686989Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 7 23:48:58.438891 containerd[1559]: time="2025-11-07T23:48:58.438871704Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 7 23:48:58.439306 containerd[1559]: time="2025-11-07T23:48:58.439168375Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 7 23:48:58.439306 containerd[1559]: time="2025-11-07T23:48:58.439302215Z" level=info msg="Start snapshots syncer" Nov 7 23:48:58.439373 containerd[1559]: time="2025-11-07T23:48:58.439343042Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 7 23:48:58.439995 containerd[1559]: time="2025-11-07T23:48:58.439868432Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 7 23:48:58.440118 containerd[1559]: time="2025-11-07T23:48:58.439997427Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 7 23:48:58.440118 containerd[1559]: time="2025-11-07T23:48:58.440080789Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 7 23:48:58.440476 containerd[1559]: time="2025-11-07T23:48:58.440446803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 7 23:48:58.440553 containerd[1559]: time="2025-11-07T23:48:58.440535248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 7 23:48:58.440575 containerd[1559]: time="2025-11-07T23:48:58.440555741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 7 23:48:58.440575 containerd[1559]: time="2025-11-07T23:48:58.440569562Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 7 23:48:58.440615 containerd[1559]: time="2025-11-07T23:48:58.440586044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 7 23:48:58.440699 containerd[1559]: time="2025-11-07T23:48:58.440682472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 7 23:48:58.440730 containerd[1559]: time="2025-11-07T23:48:58.440705467Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 7 23:48:58.440759 containerd[1559]: time="2025-11-07T23:48:58.440736802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 7 23:48:58.440779 containerd[1559]: 
time="2025-11-07T23:48:58.440765317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 7 23:48:58.440840 containerd[1559]: time="2025-11-07T23:48:58.440782713Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 7 23:48:58.440884 containerd[1559]: time="2025-11-07T23:48:58.440869133Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 7 23:48:58.440984 containerd[1559]: time="2025-11-07T23:48:58.440892167Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 7 23:48:58.440984 containerd[1559]: time="2025-11-07T23:48:58.440955831Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 7 23:48:58.441037 containerd[1559]: time="2025-11-07T23:48:58.441016078Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 7 23:48:58.441061 containerd[1559]: time="2025-11-07T23:48:58.441035697Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 7 23:48:58.441061 containerd[1559]: time="2025-11-07T23:48:58.441048923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 7 23:48:58.441094 containerd[1559]: time="2025-11-07T23:48:58.441061711Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 7 23:48:58.441446 containerd[1559]: time="2025-11-07T23:48:58.441424984Z" level=info msg="runtime interface created" Nov 7 23:48:58.441470 containerd[1559]: time="2025-11-07T23:48:58.441452785Z" level=info msg="created NRI interface" Nov 7 23:48:58.441470 containerd[1559]: time="2025-11-07T23:48:58.441466288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 7 23:48:58.441504 containerd[1559]: time="2025-11-07T23:48:58.441481658Z" level=info msg="Connect containerd service" Nov 7 23:48:58.441526 containerd[1559]: time="2025-11-07T23:48:58.441510332Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 7 23:48:58.443070 containerd[1559]: time="2025-11-07T23:48:58.443026137Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 7 23:48:58.511418 containerd[1559]: time="2025-11-07T23:48:58.511224827Z" level=info msg="Start subscribing containerd event" Nov 7 23:48:58.511418 containerd[1559]: time="2025-11-07T23:48:58.511308030Z" level=info msg="Start recovering state" Nov 7 23:48:58.511615 containerd[1559]: time="2025-11-07T23:48:58.511587307Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 7 23:48:58.511734 containerd[1559]: time="2025-11-07T23:48:58.511718168Z" level=info msg="Start event monitor" Nov 7 23:48:58.511798 containerd[1559]: time="2025-11-07T23:48:58.511787351Z" level=info msg="Start cni network conf syncer for default" Nov 7 23:48:58.511888 containerd[1559]: time="2025-11-07T23:48:58.511876353Z" level=info msg="Start streaming server" Nov 7 23:48:58.511966 containerd[1559]: time="2025-11-07T23:48:58.511955465Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 7 23:48:58.512147 containerd[1559]: time="2025-11-07T23:48:58.512132952Z" level=info msg="runtime interface starting up..." Nov 7 23:48:58.512642 containerd[1559]: time="2025-11-07T23:48:58.512206345Z" level=info msg="starting plugins..." Nov 7 23:48:58.512642 containerd[1559]: time="2025-11-07T23:48:58.512231127Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 7 23:48:58.512642 containerd[1559]: time="2025-11-07T23:48:58.511818369Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 7 23:48:58.512642 containerd[1559]: time="2025-11-07T23:48:58.512395587Z" level=info msg="containerd successfully booted in 0.119197s" Nov 7 23:48:58.512526 systemd[1]: Started containerd.service - containerd container runtime. Nov 7 23:48:58.550207 tar[1551]: linux-arm64/README.md Nov 7 23:48:58.575706 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 7 23:48:58.870937 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 7 23:48:58.893702 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 7 23:48:58.896441 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 7 23:48:58.923410 systemd[1]: issuegen.service: Deactivated successfully. Nov 7 23:48:58.923628 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 7 23:48:58.926681 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 7 23:48:58.953275 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 7 23:48:58.956192 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 7 23:48:58.958456 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 7 23:48:58.959846 systemd[1]: Reached target getty.target - Login Prompts. Nov 7 23:48:59.717862 systemd-networkd[1471]: eth0: Gained IPv6LL Nov 7 23:48:59.720673 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 7 23:48:59.722421 systemd[1]: Reached target network-online.target - Network is Online. Nov 7 23:48:59.724975 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 7 23:48:59.727508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:48:59.740174 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 7 23:48:59.761929 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 7 23:48:59.763611 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 7 23:48:59.763859 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 7 23:48:59.766068 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 7 23:49:00.310352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:49:00.311986 systemd[1]: Reached target multi-user.target - Multi-User System. 
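The "failed to load cni during init" error logged during containerd start-up above is expected on a fresh node: the CRI plugin looks for a network config in /etc/cni/net.d (the confDir from the config dump) and the "cni network conf syncer" it starts keeps watching that directory until one appears. As a minimal sketch only, the Go program below drops an illustrative bridge + portmap conflist into that directory; the file name, network name, and 10.244.0.0/16 subnet are assumptions, not values taken from this host.

// Sketch: write a minimal CNI conflist into the directory containerd's CRI
// plugin watches (/etc/cni/net.d per the config dump above). Names and the
// subnet are illustrative placeholders.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	// maxConfNum is 1 in the dumped config, so only one conflist is loaded.
	if err := os.WriteFile("/etc/cni/net.d/10-containerd-net.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /etc/cni/net.d/10-containerd-net.conflist")
}

Once such a file exists, the conf syncer started above picks it up without restarting containerd.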
Nov 7 23:49:00.314070 systemd[1]: Startup finished in 1.188s (kernel) + 5.060s (initrd) + 4.169s (userspace) = 10.419s. Nov 7 23:49:00.315081 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 7 23:49:00.648409 kubelet[1669]: E1107 23:49:00.648298 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 7 23:49:00.650322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 7 23:49:00.650449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 7 23:49:00.653745 systemd[1]: kubelet.service: Consumed 702ms CPU time, 248.5M memory peak. Nov 7 23:49:02.247359 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 7 23:49:02.248495 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:49570.service - OpenSSH per-connection server daemon (10.0.0.1:49570). Nov 7 23:49:02.329610 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 49570 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:49:02.331517 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:49:02.342214 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 7 23:49:02.344049 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 7 23:49:02.355371 systemd-logind[1543]: New session 1 of user core. Nov 7 23:49:02.369482 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 7 23:49:02.373757 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 7 23:49:02.391029 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 7 23:49:02.393338 systemd-logind[1543]: New session c1 of user core. Nov 7 23:49:02.506554 systemd[1687]: Queued start job for default target default.target. Nov 7 23:49:02.524721 systemd[1687]: Created slice app.slice - User Application Slice. Nov 7 23:49:02.524749 systemd[1687]: Reached target paths.target - Paths. Nov 7 23:49:02.524787 systemd[1687]: Reached target timers.target - Timers. Nov 7 23:49:02.525955 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 7 23:49:02.538569 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 7 23:49:02.538799 systemd[1687]: Reached target sockets.target - Sockets. Nov 7 23:49:02.538935 systemd[1687]: Reached target basic.target - Basic System. Nov 7 23:49:02.539051 systemd[1687]: Reached target default.target - Main User Target. Nov 7 23:49:02.539085 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 7 23:49:02.539182 systemd[1687]: Startup finished in 139ms. Nov 7 23:49:02.540262 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 7 23:49:02.622861 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:49584.service - OpenSSH per-connection server daemon (10.0.0.1:49584). 
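The kubelet failure above is the same story: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd keeps rescheduling it (the restart counter appears later in the log). On a node laid out like this one the file is normally generated by kubeadm; purely for illustration, the sketch below stages a minimal KubeletConfiguration. Every field value here is an assumption consistent with the rest of the log (systemd cgroup driver, containerd socket, static pod path), not a copy of the real file.

// Sketch: stage an illustrative /var/lib/kubelet/config.yaml. In practice
// kubeadm writes this file; all values below are assumptions.
package main

import (
	"log"
	"os"
)

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # matches SystemdCgroup=true in the containerd config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
clusterDNS:
  - 10.96.0.10               # illustrative cluster DNS address
clusterDomain: cluster.local
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml")
}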
Nov 7 23:49:02.691563 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 49584 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:49:02.693453 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:49:02.697828 systemd-logind[1543]: New session 2 of user core. Nov 7 23:49:02.719874 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 7 23:49:02.790798 sshd[1701]: Connection closed by 10.0.0.1 port 49584 Nov 7 23:49:02.792176 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Nov 7 23:49:02.809804 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:49584.service: Deactivated successfully. Nov 7 23:49:02.811718 systemd[1]: session-2.scope: Deactivated successfully. Nov 7 23:49:02.812582 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Nov 7 23:49:02.815837 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:49592.service - OpenSSH per-connection server daemon (10.0.0.1:49592). Nov 7 23:49:02.817530 systemd-logind[1543]: Removed session 2. Nov 7 23:49:02.883833 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 49592 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:49:02.883855 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:49:02.888714 systemd-logind[1543]: New session 3 of user core. Nov 7 23:49:02.896819 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 7 23:49:02.950848 sshd[1710]: Connection closed by 10.0.0.1 port 49592 Nov 7 23:49:02.951197 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Nov 7 23:49:02.966882 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:49592.service: Deactivated successfully. Nov 7 23:49:02.968741 systemd[1]: session-3.scope: Deactivated successfully. Nov 7 23:49:02.971959 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. Nov 7 23:49:02.973568 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:49604.service - OpenSSH per-connection server daemon (10.0.0.1:49604). Nov 7 23:49:02.974783 systemd-logind[1543]: Removed session 3. Nov 7 23:49:03.046690 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 49604 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:49:03.048661 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:49:03.053787 systemd-logind[1543]: New session 4 of user core. Nov 7 23:49:03.064880 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 7 23:49:03.123475 sshd[1720]: Connection closed by 10.0.0.1 port 49604 Nov 7 23:49:03.124007 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Nov 7 23:49:03.146084 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:49604.service: Deactivated successfully. Nov 7 23:49:03.149957 systemd[1]: session-4.scope: Deactivated successfully. Nov 7 23:49:03.153806 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. Nov 7 23:49:03.155657 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:49612.service - OpenSSH per-connection server daemon (10.0.0.1:49612). Nov 7 23:49:03.159157 systemd-logind[1543]: Removed session 4. 
Nov 7 23:49:03.224861 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 49612 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:49:03.226162 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:49:03.230575 systemd-logind[1543]: New session 5 of user core. Nov 7 23:49:03.241830 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 7 23:49:03.300890 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 7 23:49:03.301144 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 23:49:03.317605 sudo[1730]: pam_unix(sudo:session): session closed for user root Nov 7 23:49:03.319644 sshd[1729]: Connection closed by 10.0.0.1 port 49612 Nov 7 23:49:03.320321 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Nov 7 23:49:03.334944 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:49612.service: Deactivated successfully. Nov 7 23:49:03.338835 systemd[1]: session-5.scope: Deactivated successfully. Nov 7 23:49:03.341288 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Nov 7 23:49:03.344041 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:49614.service - OpenSSH per-connection server daemon (10.0.0.1:49614). Nov 7 23:49:03.344896 systemd-logind[1543]: Removed session 5. Nov 7 23:49:03.417727 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 49614 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:49:03.419396 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:49:03.424879 systemd-logind[1543]: New session 6 of user core. Nov 7 23:49:03.444847 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 7 23:49:03.497261 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 7 23:49:03.497534 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 23:49:03.506006 sudo[1741]: pam_unix(sudo:session): session closed for user root Nov 7 23:49:03.512057 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 7 23:49:03.512604 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 23:49:03.523216 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 7 23:49:03.560199 augenrules[1763]: No rules Nov 7 23:49:03.560858 systemd[1]: audit-rules.service: Deactivated successfully. Nov 7 23:49:03.561518 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 7 23:49:03.562967 sudo[1740]: pam_unix(sudo:session): session closed for user root Nov 7 23:49:03.564840 sshd[1739]: Connection closed by 10.0.0.1 port 49614 Nov 7 23:49:03.565212 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Nov 7 23:49:03.578166 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:49614.service: Deactivated successfully. Nov 7 23:49:03.580289 systemd[1]: session-6.scope: Deactivated successfully. Nov 7 23:49:03.581150 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Nov 7 23:49:03.585235 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:49616.service - OpenSSH per-connection server daemon (10.0.0.1:49616). Nov 7 23:49:03.585983 systemd-logind[1543]: Removed session 6. 
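Each login above follows the same pattern: sshd accepts the core user's RSA key, pam_unix and systemd-logind open a numbered session, and sudo escalates for individual commands before the session closes. For reference, a client-side sketch of that interaction using golang.org/x/crypto/ssh; the key path and the command are assumptions taken from the log, and a real client should pin the host key rather than ignore it.

// Sketch: authenticate to the node as "core" with a private key and run one
// of the commands seen above. Key location is an assumption.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_rsa") // assumed key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify the host key in practice
	}
	client, err := ssh.Dial("tcp", "10.0.0.25:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo /usr/sbin/setenforce 1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}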
Nov 7 23:49:03.654006 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 49616 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:49:03.655287 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:49:03.660210 systemd-logind[1543]: New session 7 of user core. Nov 7 23:49:03.667874 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 7 23:49:03.720122 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 7 23:49:03.720382 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 23:49:04.018051 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 7 23:49:04.034969 (dockerd)[1796]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 7 23:49:04.251565 dockerd[1796]: time="2025-11-07T23:49:04.251488640Z" level=info msg="Starting up" Nov 7 23:49:04.253640 dockerd[1796]: time="2025-11-07T23:49:04.252833941Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 7 23:49:04.267742 dockerd[1796]: time="2025-11-07T23:49:04.267697106Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 7 23:49:04.448407 dockerd[1796]: time="2025-11-07T23:49:04.448294604Z" level=info msg="Loading containers: start." Nov 7 23:49:04.457651 kernel: Initializing XFRM netlink socket Nov 7 23:49:04.670015 systemd-networkd[1471]: docker0: Link UP Nov 7 23:49:04.677177 dockerd[1796]: time="2025-11-07T23:49:04.677133329Z" level=info msg="Loading containers: done." Nov 7 23:49:04.728550 dockerd[1796]: time="2025-11-07T23:49:04.728383931Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 7 23:49:04.728550 dockerd[1796]: time="2025-11-07T23:49:04.728510874Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 7 23:49:04.728727 dockerd[1796]: time="2025-11-07T23:49:04.728591721Z" level=info msg="Initializing buildkit" Nov 7 23:49:04.751657 dockerd[1796]: time="2025-11-07T23:49:04.751602171Z" level=info msg="Completed buildkit initialization" Nov 7 23:49:04.756268 dockerd[1796]: time="2025-11-07T23:49:04.756236448Z" level=info msg="Daemon has completed initialization" Nov 7 23:49:04.756893 dockerd[1796]: time="2025-11-07T23:49:04.756320719Z" level=info msg="API listen on /run/docker.sock" Nov 7 23:49:04.756541 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 7 23:49:05.225158 containerd[1559]: time="2025-11-07T23:49:05.225051955Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 7 23:49:05.816559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637049259.mount: Deactivated successfully. 
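dockerd comes up above with the overlay2 storage driver and announces "API listen on /run/docker.sock". A quick health-check sketch against that socket with the official Go client follows; nothing in it is specific to this host beyond the socket path quoted from the log.

// Sketch: ping the freshly started dockerd over its Unix socket and print the
// negotiated API version and the server version.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"), // "API listen on /run/docker.sock" above
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()
	ping, err := cli.Ping(ctx)
	if err != nil {
		log.Fatal(err)
	}
	ver, err := cli.ServerVersion(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("API %s, server version %s\n", ping.APIVersion, ver.Version)
}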
Nov 7 23:49:06.920067 containerd[1559]: time="2025-11-07T23:49:06.919999222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:06.921562 containerd[1559]: time="2025-11-07T23:49:06.921266771Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Nov 7 23:49:06.922502 containerd[1559]: time="2025-11-07T23:49:06.922471907Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:06.926459 containerd[1559]: time="2025-11-07T23:49:06.926407306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:06.927637 containerd[1559]: time="2025-11-07T23:49:06.927591053Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.702493263s" Nov 7 23:49:06.927740 containerd[1559]: time="2025-11-07T23:49:06.927643748Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Nov 7 23:49:06.928583 containerd[1559]: time="2025-11-07T23:49:06.928416171Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 7 23:49:08.036478 containerd[1559]: time="2025-11-07T23:49:08.036424763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:08.038042 containerd[1559]: time="2025-11-07T23:49:08.038010975Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Nov 7 23:49:08.039164 containerd[1559]: time="2025-11-07T23:49:08.039139901Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:08.042247 containerd[1559]: time="2025-11-07T23:49:08.042211104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:08.044005 containerd[1559]: time="2025-11-07T23:49:08.043880364Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.115431727s" Nov 7 23:49:08.044005 containerd[1559]: time="2025-11-07T23:49:08.043919736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Nov 7 23:49:08.044505 containerd[1559]: 
time="2025-11-07T23:49:08.044374511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 7 23:49:08.904160 containerd[1559]: time="2025-11-07T23:49:08.904055117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:08.904997 containerd[1559]: time="2025-11-07T23:49:08.904964348Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Nov 7 23:49:08.905909 containerd[1559]: time="2025-11-07T23:49:08.905858753Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:08.910578 containerd[1559]: time="2025-11-07T23:49:08.910535974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:08.912290 containerd[1559]: time="2025-11-07T23:49:08.912239983Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 867.722289ms" Nov 7 23:49:08.912290 containerd[1559]: time="2025-11-07T23:49:08.912275012Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Nov 7 23:49:08.913035 containerd[1559]: time="2025-11-07T23:49:08.912845712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 7 23:49:09.909223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533978355.mount: Deactivated successfully. 
Nov 7 23:49:10.081160 containerd[1559]: time="2025-11-07T23:49:10.081101220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:10.081817 containerd[1559]: time="2025-11-07T23:49:10.081790589Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Nov 7 23:49:10.083055 containerd[1559]: time="2025-11-07T23:49:10.083008417Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:10.087195 containerd[1559]: time="2025-11-07T23:49:10.086919572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:10.087760 containerd[1559]: time="2025-11-07T23:49:10.087729703Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.174845252s" Nov 7 23:49:10.087798 containerd[1559]: time="2025-11-07T23:49:10.087766860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Nov 7 23:49:10.088398 containerd[1559]: time="2025-11-07T23:49:10.088364292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 7 23:49:10.601230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699092656.mount: Deactivated successfully. Nov 7 23:49:10.900822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 7 23:49:10.902165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:49:11.039625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:49:11.043843 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 7 23:49:11.197535 kubelet[2147]: E1107 23:49:11.197406 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 7 23:49:11.200247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 7 23:49:11.200364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 7 23:49:11.201777 systemd[1]: kubelet.service: Consumed 152ms CPU time, 108.2M memory peak. 
Nov 7 23:49:11.547486 containerd[1559]: time="2025-11-07T23:49:11.547129613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:11.548245 containerd[1559]: time="2025-11-07T23:49:11.548201822Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Nov 7 23:49:11.550001 containerd[1559]: time="2025-11-07T23:49:11.549953134Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:11.553394 containerd[1559]: time="2025-11-07T23:49:11.553363364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:11.554460 containerd[1559]: time="2025-11-07T23:49:11.554427079Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.46601857s" Nov 7 23:49:11.554522 containerd[1559]: time="2025-11-07T23:49:11.554465760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 7 23:49:11.555488 containerd[1559]: time="2025-11-07T23:49:11.555448804Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 7 23:49:12.093795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684746011.mount: Deactivated successfully. 
Nov 7 23:49:12.109062 containerd[1559]: time="2025-11-07T23:49:12.109000917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:12.109540 containerd[1559]: time="2025-11-07T23:49:12.109506893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Nov 7 23:49:12.111996 containerd[1559]: time="2025-11-07T23:49:12.111954688Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:12.115242 containerd[1559]: time="2025-11-07T23:49:12.115177280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:12.116014 containerd[1559]: time="2025-11-07T23:49:12.115834299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 560.350481ms" Nov 7 23:49:12.116014 containerd[1559]: time="2025-11-07T23:49:12.115865528Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 7 23:49:12.116477 containerd[1559]: time="2025-11-07T23:49:12.116442179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 7 23:49:15.413226 containerd[1559]: time="2025-11-07T23:49:15.413145967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:15.415085 containerd[1559]: time="2025-11-07T23:49:15.415044242Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Nov 7 23:49:15.415949 containerd[1559]: time="2025-11-07T23:49:15.415867280Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:15.423718 containerd[1559]: time="2025-11-07T23:49:15.423671797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:15.425786 containerd[1559]: time="2025-11-07T23:49:15.425724105Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.308693903s" Nov 7 23:49:15.425786 containerd[1559]: time="2025-11-07T23:49:15.425777737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 7 23:49:20.728642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:49:20.728805 systemd[1]: kubelet.service: Consumed 152ms CPU time, 108.2M memory peak. 
Nov 7 23:49:20.730842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:49:20.752976 systemd[1]: Reload requested from client PID 2235 ('systemctl') (unit session-7.scope)... Nov 7 23:49:20.753108 systemd[1]: Reloading... Nov 7 23:49:20.839672 zram_generator::config[2283]: No configuration found. Nov 7 23:49:21.052759 systemd[1]: Reloading finished in 299 ms. Nov 7 23:49:21.104185 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 7 23:49:21.104271 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 7 23:49:21.105730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:49:21.105788 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95.1M memory peak. Nov 7 23:49:21.107359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:49:21.235393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:49:21.239885 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 7 23:49:21.273626 kubelet[2325]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 7 23:49:21.273626 kubelet[2325]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 7 23:49:21.274406 kubelet[2325]: I1107 23:49:21.274141 2325 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 7 23:49:22.617734 kubelet[2325]: I1107 23:49:22.617684 2325 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 7 23:49:22.617734 kubelet[2325]: I1107 23:49:22.617723 2325 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 7 23:49:22.620080 kubelet[2325]: I1107 23:49:22.620003 2325 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 7 23:49:22.620080 kubelet[2325]: I1107 23:49:22.620024 2325 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 7 23:49:22.620787 kubelet[2325]: I1107 23:49:22.620752 2325 server.go:956] "Client rotation is on, will bootstrap in background" Nov 7 23:49:22.761209 kubelet[2325]: I1107 23:49:22.761136 2325 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 7 23:49:22.761799 kubelet[2325]: E1107 23:49:22.761754 2325 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 7 23:49:22.769164 kubelet[2325]: I1107 23:49:22.769128 2325 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 7 23:49:22.771513 kubelet[2325]: I1107 23:49:22.771446 2325 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 7 23:49:22.771713 kubelet[2325]: I1107 23:49:22.771665 2325 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 7 23:49:22.771879 kubelet[2325]: I1107 23:49:22.771698 2325 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 7 23:49:22.771879 kubelet[2325]: I1107 23:49:22.771863 2325 topology_manager.go:138] "Creating topology manager with none policy" Nov 7 23:49:22.771879 kubelet[2325]: I1107 23:49:22.771872 2325 container_manager_linux.go:306] "Creating device plugin manager" Nov 7 23:49:22.772245 kubelet[2325]: I1107 23:49:22.771978 2325 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 7 23:49:22.776242 kubelet[2325]: I1107 23:49:22.776195 2325 state_mem.go:36] "Initialized new in-memory state store" Nov 7 23:49:22.777616 kubelet[2325]: I1107 23:49:22.777555 2325 kubelet.go:475] "Attempting to sync node with API server" Nov 7 23:49:22.778778 kubelet[2325]: E1107 23:49:22.778231 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 7 23:49:22.778778 kubelet[2325]: I1107 23:49:22.778304 2325 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 7 23:49:22.778870 kubelet[2325]: I1107 23:49:22.778847 2325 kubelet.go:387] "Adding apiserver pod source" Nov 7 23:49:22.778870 kubelet[2325]: I1107 23:49:22.778869 2325 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 7 23:49:22.780101 kubelet[2325]: E1107 23:49:22.780049 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 7 23:49:22.782986 kubelet[2325]: I1107 23:49:22.782882 2325 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 7 23:49:22.784069 kubelet[2325]: I1107 23:49:22.783614 2325 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 7 23:49:22.784069 kubelet[2325]: I1107 23:49:22.783666 2325 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 7 23:49:22.784069 kubelet[2325]: W1107 23:49:22.783707 2325 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 7 23:49:22.788028 kubelet[2325]: I1107 23:49:22.787995 2325 server.go:1262] "Started kubelet" Nov 7 23:49:22.788353 kubelet[2325]: I1107 23:49:22.788320 2325 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 7 23:49:22.790084 kubelet[2325]: I1107 23:49:22.789939 2325 server.go:310] "Adding debug handlers to kubelet server" Nov 7 23:49:22.795470 kubelet[2325]: I1107 23:49:22.794847 2325 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 7 23:49:22.795470 kubelet[2325]: I1107 23:49:22.791899 2325 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 7 23:49:22.795470 kubelet[2325]: I1107 23:49:22.795214 2325 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 7 23:49:22.795470 kubelet[2325]: I1107 23:49:22.795387 2325 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 7 23:49:22.796385 kubelet[2325]: I1107 23:49:22.796351 2325 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 7 23:49:22.796927 kubelet[2325]: E1107 23:49:22.796902 2325 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 23:49:22.796977 kubelet[2325]: I1107 23:49:22.796939 2325 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 7 23:49:22.797141 kubelet[2325]: I1107 23:49:22.797129 2325 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 7 23:49:22.797218 kubelet[2325]: I1107 23:49:22.797198 2325 reconciler.go:29] "Reconciler: start to sync state" Nov 7 23:49:22.797728 kubelet[2325]: E1107 23:49:22.797701 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 7 23:49:22.798281 kubelet[2325]: I1107 23:49:22.798258 2325 factory.go:223] Registration of the systemd container factory successfully Nov 7 23:49:22.798463 kubelet[2325]: I1107 23:49:22.798443 2325 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 7 23:49:22.798861 kubelet[2325]: E1107 23:49:22.798799 2325 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" Nov 7 23:49:22.798988 kubelet[2325]: E1107 23:49:22.798962 2325 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 7 23:49:22.799205 kubelet[2325]: E1107 23:49:22.795678 2325 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875de66ea83f406 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-07 23:49:22.78796391 +0000 UTC m=+1.545268544,LastTimestamp:2025-11-07 23:49:22.78796391 +0000 UTC m=+1.545268544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 7 23:49:22.799714 kubelet[2325]: I1107 23:49:22.799686 2325 factory.go:223] Registration of the containerd container factory successfully Nov 7 23:49:22.803386 kubelet[2325]: I1107 23:49:22.803341 2325 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 7 23:49:22.813097 kubelet[2325]: I1107 23:49:22.813055 2325 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 7 23:49:22.813097 kubelet[2325]: I1107 23:49:22.813078 2325 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 7 23:49:22.813097 kubelet[2325]: I1107 23:49:22.813097 2325 state_mem.go:36] "Initialized new in-memory state store" Nov 7 23:49:22.815292 kubelet[2325]: I1107 23:49:22.815247 2325 policy_none.go:49] "None policy: Start" Nov 7 23:49:22.815292 kubelet[2325]: I1107 23:49:22.815281 2325 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 7 23:49:22.815292 kubelet[2325]: I1107 23:49:22.815293 2325 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 7 23:49:22.819388 kubelet[2325]: I1107 23:49:22.818214 2325 policy_none.go:47] "Start" Nov 7 23:49:22.822815 kubelet[2325]: I1107 23:49:22.822225 2325 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 7 23:49:22.822815 kubelet[2325]: I1107 23:49:22.822248 2325 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 7 23:49:22.822815 kubelet[2325]: I1107 23:49:22.822270 2325 kubelet.go:2427] "Starting kubelet main sync loop" Nov 7 23:49:22.822815 kubelet[2325]: E1107 23:49:22.822308 2325 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 7 23:49:22.823695 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 7 23:49:22.824757 kubelet[2325]: E1107 23:49:22.824532 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 7 23:49:22.835306 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 7 23:49:22.838150 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 7 23:49:22.851544 kubelet[2325]: E1107 23:49:22.851495 2325 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 7 23:49:22.851947 kubelet[2325]: I1107 23:49:22.851727 2325 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 7 23:49:22.851947 kubelet[2325]: I1107 23:49:22.851743 2325 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 7 23:49:22.852329 kubelet[2325]: I1107 23:49:22.852065 2325 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 7 23:49:22.852994 kubelet[2325]: E1107 23:49:22.852977 2325 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 7 23:49:22.853099 kubelet[2325]: E1107 23:49:22.853087 2325 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 7 23:49:22.933026 systemd[1]: Created slice kubepods-burstable-pod9efcadec264390c6e6393f2289762437.slice - libcontainer container kubepods-burstable-pod9efcadec264390c6e6393f2289762437.slice. Nov 7 23:49:22.953371 kubelet[2325]: I1107 23:49:22.953316 2325 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 23:49:22.953823 kubelet[2325]: E1107 23:49:22.953797 2325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 7 23:49:22.962604 kubelet[2325]: E1107 23:49:22.962571 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:49:22.964327 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 7 23:49:22.981899 kubelet[2325]: E1107 23:49:22.981855 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:49:22.984374 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
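Every "dial tcp 10.0.0.25:6443: connect: connection refused" above is the kubelet talking to an API server that does not exist yet; it is about to start kube-apiserver, kube-controller-manager, and kube-scheduler itself as static pods from /etc/kubernetes/manifests (hence the kubepods-burstable slices just created), after which node registration can succeed. The sketch below makes the same check from outside with client-go; the admin kubeconfig path is the conventional kubeadm location and is an assumption here.

// Sketch: poll the API server the kubelet is waiting for and report whether
// the "localhost" Node object exists yet. Kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		node, err := clientset.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
		if err != nil {
			fmt.Println("not registered yet:", err) // expect "connection refused" until kube-apiserver is up
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("node registered:", node.Name)
		return
	}
}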
Nov 7 23:49:22.986479 kubelet[2325]: E1107 23:49:22.986457 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:49:22.998672 kubelet[2325]: I1107 23:49:22.998623 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:22.998900 kubelet[2325]: I1107 23:49:22.998790 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:22.998900 kubelet[2325]: I1107 23:49:22.998812 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9efcadec264390c6e6393f2289762437-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9efcadec264390c6e6393f2289762437\") " pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:22.998900 kubelet[2325]: I1107 23:49:22.998832 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:22.998900 kubelet[2325]: I1107 23:49:22.998846 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:22.999201 kubelet[2325]: I1107 23:49:22.999166 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:22.999240 kubelet[2325]: I1107 23:49:22.999214 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:22.999240 kubelet[2325]: E1107 23:49:22.999219 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" Nov 7 23:49:22.999285 kubelet[2325]: I1107 23:49:22.999233 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/9efcadec264390c6e6393f2289762437-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9efcadec264390c6e6393f2289762437\") " pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:22.999285 kubelet[2325]: I1107 23:49:22.999274 2325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9efcadec264390c6e6393f2289762437-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9efcadec264390c6e6393f2289762437\") " pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:23.156061 kubelet[2325]: I1107 23:49:23.156028 2325 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 23:49:23.156476 kubelet[2325]: E1107 23:49:23.156440 2325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 7 23:49:23.266438 kubelet[2325]: E1107 23:49:23.266015 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:23.267098 containerd[1559]: time="2025-11-07T23:49:23.267064079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9efcadec264390c6e6393f2289762437,Namespace:kube-system,Attempt:0,}" Nov 7 23:49:23.284836 kubelet[2325]: E1107 23:49:23.284792 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:23.285265 containerd[1559]: time="2025-11-07T23:49:23.285232501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 7 23:49:23.289309 kubelet[2325]: E1107 23:49:23.289249 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:23.289743 containerd[1559]: time="2025-11-07T23:49:23.289710458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 7 23:49:23.400185 kubelet[2325]: E1107 23:49:23.400116 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" Nov 7 23:49:23.558339 kubelet[2325]: I1107 23:49:23.558247 2325 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 23:49:23.558820 kubelet[2325]: E1107 23:49:23.558776 2325 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 7 23:49:23.653104 kubelet[2325]: E1107 23:49:23.653005 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 7 23:49:23.709804 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274577937.mount: Deactivated successfully. Nov 7 23:49:23.715531 containerd[1559]: time="2025-11-07T23:49:23.715401341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 23:49:23.717045 containerd[1559]: time="2025-11-07T23:49:23.716994238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 7 23:49:23.719436 containerd[1559]: time="2025-11-07T23:49:23.719166990Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 23:49:23.720692 containerd[1559]: time="2025-11-07T23:49:23.720654796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 23:49:23.721445 containerd[1559]: time="2025-11-07T23:49:23.721420428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 23:49:23.722219 containerd[1559]: time="2025-11-07T23:49:23.722189695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 7 23:49:23.722557 containerd[1559]: time="2025-11-07T23:49:23.722535883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 7 23:49:23.724654 containerd[1559]: time="2025-11-07T23:49:23.724566278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 23:49:23.726672 containerd[1559]: time="2025-11-07T23:49:23.726140121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 439.712838ms" Nov 7 23:49:23.727043 containerd[1559]: time="2025-11-07T23:49:23.726968224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 435.859154ms" Nov 7 23:49:23.728027 containerd[1559]: time="2025-11-07T23:49:23.727998840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 459.330281ms" Nov 7 23:49:23.750704 containerd[1559]: time="2025-11-07T23:49:23.749514625Z" level=info msg="connecting to shim 35bfd278a3d024917c064bda84e36a45fbcbed72df95011f27feb344d4540cd4" 
address="unix:///run/containerd/s/159afc199146a3b435c5e1d794d364b7a824b1589bfda1df51a7de81d3247995" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:49:23.756575 containerd[1559]: time="2025-11-07T23:49:23.756532772Z" level=info msg="connecting to shim 1dbe008380a369f296dc256fa167a9329bc46106cc10318011fceb6d996c35a5" address="unix:///run/containerd/s/15ab29bdf2c8302768c881a1491be7e0efe0c77cc6708210a10f635b9c11f734" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:49:23.757369 containerd[1559]: time="2025-11-07T23:49:23.757028508Z" level=info msg="connecting to shim 317cdb435a70a0da410a4a62ae3810d832aa2ddce9b3b3195944846e435c7485" address="unix:///run/containerd/s/75ba71dedf95f2a2960ddc9d38fe8e82afb4287179adc368573987d5dc205b88" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:49:23.786850 systemd[1]: Started cri-containerd-35bfd278a3d024917c064bda84e36a45fbcbed72df95011f27feb344d4540cd4.scope - libcontainer container 35bfd278a3d024917c064bda84e36a45fbcbed72df95011f27feb344d4540cd4. Nov 7 23:49:23.790573 systemd[1]: Started cri-containerd-1dbe008380a369f296dc256fa167a9329bc46106cc10318011fceb6d996c35a5.scope - libcontainer container 1dbe008380a369f296dc256fa167a9329bc46106cc10318011fceb6d996c35a5. Nov 7 23:49:23.792117 systemd[1]: Started cri-containerd-317cdb435a70a0da410a4a62ae3810d832aa2ddce9b3b3195944846e435c7485.scope - libcontainer container 317cdb435a70a0da410a4a62ae3810d832aa2ddce9b3b3195944846e435c7485. Nov 7 23:49:23.838046 containerd[1559]: time="2025-11-07T23:49:23.837917763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"35bfd278a3d024917c064bda84e36a45fbcbed72df95011f27feb344d4540cd4\"" Nov 7 23:49:23.839894 kubelet[2325]: E1107 23:49:23.839769 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:23.840865 containerd[1559]: time="2025-11-07T23:49:23.840785288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"317cdb435a70a0da410a4a62ae3810d832aa2ddce9b3b3195944846e435c7485\"" Nov 7 23:49:23.841821 containerd[1559]: time="2025-11-07T23:49:23.841708895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9efcadec264390c6e6393f2289762437,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dbe008380a369f296dc256fa167a9329bc46106cc10318011fceb6d996c35a5\"" Nov 7 23:49:23.841928 kubelet[2325]: E1107 23:49:23.841899 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:23.842430 kubelet[2325]: E1107 23:49:23.842407 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:23.845943 containerd[1559]: time="2025-11-07T23:49:23.845890673Z" level=info msg="CreateContainer within sandbox \"35bfd278a3d024917c064bda84e36a45fbcbed72df95011f27feb344d4540cd4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 7 23:49:23.848675 containerd[1559]: time="2025-11-07T23:49:23.848164681Z" level=info msg="CreateContainer within sandbox 
\"317cdb435a70a0da410a4a62ae3810d832aa2ddce9b3b3195944846e435c7485\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 7 23:49:23.850483 containerd[1559]: time="2025-11-07T23:49:23.850089626Z" level=info msg="CreateContainer within sandbox \"1dbe008380a369f296dc256fa167a9329bc46106cc10318011fceb6d996c35a5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 7 23:49:23.854426 kubelet[2325]: E1107 23:49:23.854387 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 7 23:49:23.858827 containerd[1559]: time="2025-11-07T23:49:23.858792619Z" level=info msg="Container 61a80fc5f4adac41835b0d274ed6704b65eb0f0a3e7812bb2aa9f2a125089e93: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:49:23.861143 containerd[1559]: time="2025-11-07T23:49:23.861108128Z" level=info msg="Container 026fd3cac98ee1138936e126f5fa3c2144caba499132e8307aa005e33a61709e: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:49:23.862433 containerd[1559]: time="2025-11-07T23:49:23.862402050Z" level=info msg="Container d03ec2c65e217da7127ce8d7117d933d4beaaae2f04c8bbf8771429c40342865: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:49:23.870639 containerd[1559]: time="2025-11-07T23:49:23.870591892Z" level=info msg="CreateContainer within sandbox \"317cdb435a70a0da410a4a62ae3810d832aa2ddce9b3b3195944846e435c7485\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"026fd3cac98ee1138936e126f5fa3c2144caba499132e8307aa005e33a61709e\"" Nov 7 23:49:23.871539 containerd[1559]: time="2025-11-07T23:49:23.871495368Z" level=info msg="StartContainer for \"026fd3cac98ee1138936e126f5fa3c2144caba499132e8307aa005e33a61709e\"" Nov 7 23:49:23.872759 containerd[1559]: time="2025-11-07T23:49:23.872677448Z" level=info msg="connecting to shim 026fd3cac98ee1138936e126f5fa3c2144caba499132e8307aa005e33a61709e" address="unix:///run/containerd/s/75ba71dedf95f2a2960ddc9d38fe8e82afb4287179adc368573987d5dc205b88" protocol=ttrpc version=3 Nov 7 23:49:23.874986 containerd[1559]: time="2025-11-07T23:49:23.874949060Z" level=info msg="CreateContainer within sandbox \"35bfd278a3d024917c064bda84e36a45fbcbed72df95011f27feb344d4540cd4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"61a80fc5f4adac41835b0d274ed6704b65eb0f0a3e7812bb2aa9f2a125089e93\"" Nov 7 23:49:23.875824 containerd[1559]: time="2025-11-07T23:49:23.875793900Z" level=info msg="StartContainer for \"61a80fc5f4adac41835b0d274ed6704b65eb0f0a3e7812bb2aa9f2a125089e93\"" Nov 7 23:49:23.876121 containerd[1559]: time="2025-11-07T23:49:23.876072863Z" level=info msg="CreateContainer within sandbox \"1dbe008380a369f296dc256fa167a9329bc46106cc10318011fceb6d996c35a5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d03ec2c65e217da7127ce8d7117d933d4beaaae2f04c8bbf8771429c40342865\"" Nov 7 23:49:23.876478 containerd[1559]: time="2025-11-07T23:49:23.876433431Z" level=info msg="StartContainer for \"d03ec2c65e217da7127ce8d7117d933d4beaaae2f04c8bbf8771429c40342865\"" Nov 7 23:49:23.877188 containerd[1559]: time="2025-11-07T23:49:23.877120854Z" level=info msg="connecting to shim 61a80fc5f4adac41835b0d274ed6704b65eb0f0a3e7812bb2aa9f2a125089e93" 
address="unix:///run/containerd/s/159afc199146a3b435c5e1d794d364b7a824b1589bfda1df51a7de81d3247995" protocol=ttrpc version=3 Nov 7 23:49:23.877532 containerd[1559]: time="2025-11-07T23:49:23.877505467Z" level=info msg="connecting to shim d03ec2c65e217da7127ce8d7117d933d4beaaae2f04c8bbf8771429c40342865" address="unix:///run/containerd/s/15ab29bdf2c8302768c881a1491be7e0efe0c77cc6708210a10f635b9c11f734" protocol=ttrpc version=3 Nov 7 23:49:23.898831 systemd[1]: Started cri-containerd-026fd3cac98ee1138936e126f5fa3c2144caba499132e8307aa005e33a61709e.scope - libcontainer container 026fd3cac98ee1138936e126f5fa3c2144caba499132e8307aa005e33a61709e. Nov 7 23:49:23.900052 systemd[1]: Started cri-containerd-61a80fc5f4adac41835b0d274ed6704b65eb0f0a3e7812bb2aa9f2a125089e93.scope - libcontainer container 61a80fc5f4adac41835b0d274ed6704b65eb0f0a3e7812bb2aa9f2a125089e93. Nov 7 23:49:23.903333 systemd[1]: Started cri-containerd-d03ec2c65e217da7127ce8d7117d933d4beaaae2f04c8bbf8771429c40342865.scope - libcontainer container d03ec2c65e217da7127ce8d7117d933d4beaaae2f04c8bbf8771429c40342865. Nov 7 23:49:23.941189 containerd[1559]: time="2025-11-07T23:49:23.940967407Z" level=info msg="StartContainer for \"61a80fc5f4adac41835b0d274ed6704b65eb0f0a3e7812bb2aa9f2a125089e93\" returns successfully" Nov 7 23:49:23.957310 containerd[1559]: time="2025-11-07T23:49:23.957262970Z" level=info msg="StartContainer for \"d03ec2c65e217da7127ce8d7117d933d4beaaae2f04c8bbf8771429c40342865\" returns successfully" Nov 7 23:49:23.958976 containerd[1559]: time="2025-11-07T23:49:23.958912386Z" level=info msg="StartContainer for \"026fd3cac98ee1138936e126f5fa3c2144caba499132e8307aa005e33a61709e\" returns successfully" Nov 7 23:49:23.988700 kubelet[2325]: E1107 23:49:23.988549 2325 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 7 23:49:24.360993 kubelet[2325]: I1107 23:49:24.360960 2325 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 23:49:24.837398 kubelet[2325]: E1107 23:49:24.837362 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:49:24.837731 kubelet[2325]: E1107 23:49:24.837503 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:24.841203 kubelet[2325]: E1107 23:49:24.841171 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:49:24.841311 kubelet[2325]: E1107 23:49:24.841292 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:24.845816 kubelet[2325]: E1107 23:49:24.845789 2325 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:49:24.845926 kubelet[2325]: E1107 23:49:24.845908 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:25.361791 kubelet[2325]: E1107 23:49:25.361747 2325 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 7 23:49:25.454538 kubelet[2325]: I1107 23:49:25.454083 2325 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 7 23:49:25.454538 kubelet[2325]: E1107 23:49:25.454442 2325 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 7 23:49:25.474694 kubelet[2325]: E1107 23:49:25.474660 2325 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 23:49:25.575254 kubelet[2325]: E1107 23:49:25.575193 2325 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 23:49:25.699235 kubelet[2325]: I1107 23:49:25.699118 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:25.712529 kubelet[2325]: E1107 23:49:25.712472 2325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:25.712529 kubelet[2325]: I1107 23:49:25.712510 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:25.714648 kubelet[2325]: E1107 23:49:25.714555 2325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:25.714648 kubelet[2325]: I1107 23:49:25.714581 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:25.716754 kubelet[2325]: E1107 23:49:25.716686 2325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:25.780604 kubelet[2325]: I1107 23:49:25.780578 2325 apiserver.go:52] "Watching apiserver" Nov 7 23:49:25.797404 kubelet[2325]: I1107 23:49:25.797357 2325 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 7 23:49:25.846802 kubelet[2325]: I1107 23:49:25.846085 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:25.846802 kubelet[2325]: I1107 23:49:25.846189 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:25.848085 kubelet[2325]: E1107 23:49:25.848036 2325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:25.848250 kubelet[2325]: E1107 23:49:25.848232 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:25.848494 kubelet[2325]: E1107 23:49:25.848472 2325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:25.848596 kubelet[2325]: E1107 23:49:25.848581 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:26.848785 kubelet[2325]: I1107 23:49:26.848758 2325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:26.855230 kubelet[2325]: E1107 23:49:26.855173 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:27.695692 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-7.scope)... Nov 7 23:49:27.695707 systemd[1]: Reloading... Nov 7 23:49:27.768716 zram_generator::config[2662]: No configuration found. Nov 7 23:49:27.850333 kubelet[2325]: E1107 23:49:27.850245 2325 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:27.946472 systemd[1]: Reloading finished in 250 ms. Nov 7 23:49:27.978796 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:49:28.001607 systemd[1]: kubelet.service: Deactivated successfully. Nov 7 23:49:28.001886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:49:28.001946 systemd[1]: kubelet.service: Consumed 1.766s CPU time, 122M memory peak. Nov 7 23:49:28.003916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:49:28.183361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:49:28.195012 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 7 23:49:28.247615 kubelet[2704]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 7 23:49:28.247615 kubelet[2704]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 7 23:49:28.247615 kubelet[2704]: I1107 23:49:28.247580 2704 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 7 23:49:28.255417 kubelet[2704]: I1107 23:49:28.255371 2704 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 7 23:49:28.255417 kubelet[2704]: I1107 23:49:28.255403 2704 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 7 23:49:28.255576 kubelet[2704]: I1107 23:49:28.255437 2704 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 7 23:49:28.255576 kubelet[2704]: I1107 23:49:28.255445 2704 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 7 23:49:28.255985 kubelet[2704]: I1107 23:49:28.255935 2704 server.go:956] "Client rotation is on, will bootstrap in background" Nov 7 23:49:28.257743 kubelet[2704]: I1107 23:49:28.257699 2704 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 7 23:49:28.260402 kubelet[2704]: I1107 23:49:28.260352 2704 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 7 23:49:28.265832 kubelet[2704]: I1107 23:49:28.265191 2704 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 7 23:49:28.267890 kubelet[2704]: I1107 23:49:28.267834 2704 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 7 23:49:28.268127 kubelet[2704]: I1107 23:49:28.268076 2704 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 7 23:49:28.268313 kubelet[2704]: I1107 23:49:28.268127 2704 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 7 23:49:28.268313 kubelet[2704]: I1107 23:49:28.268300 2704 topology_manager.go:138] "Creating topology manager with none policy" Nov 7 23:49:28.268313 kubelet[2704]: I1107 23:49:28.268311 2704 container_manager_linux.go:306] "Creating device plugin manager" Nov 7 23:49:28.268443 kubelet[2704]: I1107 23:49:28.268335 2704 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 7 23:49:28.269327 kubelet[2704]: I1107 23:49:28.269287 2704 state_mem.go:36] "Initialized new in-memory state store" Nov 7 23:49:28.269462 kubelet[2704]: I1107 23:49:28.269456 2704 kubelet.go:475] "Attempting to sync node with API server" Nov 7 23:49:28.269616 kubelet[2704]: I1107 23:49:28.269476 2704 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 7 23:49:28.269616 kubelet[2704]: I1107 23:49:28.269503 2704 kubelet.go:387] "Adding apiserver pod source" Nov 7 
23:49:28.269616 kubelet[2704]: I1107 23:49:28.269516 2704 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 7 23:49:28.271006 kubelet[2704]: I1107 23:49:28.270980 2704 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 7 23:49:28.271753 kubelet[2704]: I1107 23:49:28.271733 2704 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 7 23:49:28.271817 kubelet[2704]: I1107 23:49:28.271769 2704 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 7 23:49:28.275898 kubelet[2704]: I1107 23:49:28.275873 2704 server.go:1262] "Started kubelet" Nov 7 23:49:28.276934 kubelet[2704]: I1107 23:49:28.276884 2704 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 7 23:49:28.277040 kubelet[2704]: I1107 23:49:28.277026 2704 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 7 23:49:28.277319 kubelet[2704]: I1107 23:49:28.277302 2704 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 7 23:49:28.277408 kubelet[2704]: I1107 23:49:28.276964 2704 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 7 23:49:28.278325 kubelet[2704]: I1107 23:49:28.278304 2704 server.go:310] "Adding debug handlers to kubelet server" Nov 7 23:49:28.285473 kubelet[2704]: I1107 23:49:28.283075 2704 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 7 23:49:28.290411 kubelet[2704]: I1107 23:49:28.290319 2704 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 7 23:49:28.290806 kubelet[2704]: E1107 23:49:28.290783 2704 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 23:49:28.290920 kubelet[2704]: I1107 23:49:28.290908 2704 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 7 23:49:28.291128 kubelet[2704]: I1107 23:49:28.291114 2704 reconciler.go:29] "Reconciler: start to sync state" Nov 7 23:49:28.294824 kubelet[2704]: I1107 23:49:28.294786 2704 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 7 23:49:28.295437 kubelet[2704]: E1107 23:49:28.295383 2704 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 7 23:49:28.295702 kubelet[2704]: I1107 23:49:28.295679 2704 factory.go:223] Registration of the systemd container factory successfully Nov 7 23:49:28.295810 kubelet[2704]: I1107 23:49:28.295789 2704 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 7 23:49:28.306048 kubelet[2704]: I1107 23:49:28.305740 2704 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 7 23:49:28.307315 kubelet[2704]: I1107 23:49:28.307259 2704 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 7 23:49:28.307315 kubelet[2704]: I1107 23:49:28.307290 2704 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 7 23:49:28.307315 kubelet[2704]: I1107 23:49:28.307318 2704 kubelet.go:2427] "Starting kubelet main sync loop" Nov 7 23:49:28.307438 kubelet[2704]: E1107 23:49:28.307361 2704 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 7 23:49:28.311675 kubelet[2704]: I1107 23:49:28.309819 2704 factory.go:223] Registration of the containerd container factory successfully Nov 7 23:49:28.348328 kubelet[2704]: I1107 23:49:28.348299 2704 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 7 23:49:28.348328 kubelet[2704]: I1107 23:49:28.348316 2704 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 7 23:49:28.348328 kubelet[2704]: I1107 23:49:28.348338 2704 state_mem.go:36] "Initialized new in-memory state store" Nov 7 23:49:28.348486 kubelet[2704]: I1107 23:49:28.348468 2704 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 7 23:49:28.348517 kubelet[2704]: I1107 23:49:28.348477 2704 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 7 23:49:28.348517 kubelet[2704]: I1107 23:49:28.348492 2704 policy_none.go:49] "None policy: Start" Nov 7 23:49:28.348517 kubelet[2704]: I1107 23:49:28.348500 2704 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 7 23:49:28.348517 kubelet[2704]: I1107 23:49:28.348509 2704 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 7 23:49:28.348879 kubelet[2704]: I1107 23:49:28.348599 2704 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 7 23:49:28.348879 kubelet[2704]: I1107 23:49:28.348614 2704 policy_none.go:47] "Start" Nov 7 23:49:28.354131 kubelet[2704]: E1107 23:49:28.354098 2704 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 7 23:49:28.354284 kubelet[2704]: I1107 23:49:28.354262 2704 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 7 23:49:28.354319 kubelet[2704]: I1107 23:49:28.354293 2704 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 7 23:49:28.354831 kubelet[2704]: I1107 23:49:28.354796 2704 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 7 23:49:28.356534 kubelet[2704]: E1107 23:49:28.356508 2704 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 7 23:49:28.409097 kubelet[2704]: I1107 23:49:28.409061 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:28.409097 kubelet[2704]: I1107 23:49:28.409082 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:28.409488 kubelet[2704]: I1107 23:49:28.409467 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:28.415704 kubelet[2704]: E1107 23:49:28.415669 2704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:28.458179 kubelet[2704]: I1107 23:49:28.458140 2704 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 23:49:28.466938 kubelet[2704]: I1107 23:49:28.466851 2704 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 7 23:49:28.467072 kubelet[2704]: I1107 23:49:28.466994 2704 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 7 23:49:28.492711 kubelet[2704]: I1107 23:49:28.492595 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9efcadec264390c6e6393f2289762437-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9efcadec264390c6e6393f2289762437\") " pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:28.492711 kubelet[2704]: I1107 23:49:28.492650 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:28.492870 kubelet[2704]: I1107 23:49:28.492736 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:28.492870 kubelet[2704]: I1107 23:49:28.492785 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:28.492870 kubelet[2704]: I1107 23:49:28.492801 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:28.492870 kubelet[2704]: I1107 23:49:28.492819 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9efcadec264390c6e6393f2289762437-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9efcadec264390c6e6393f2289762437\") " 
pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:28.492870 kubelet[2704]: I1107 23:49:28.492837 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:49:28.492988 kubelet[2704]: I1107 23:49:28.492851 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:28.492988 kubelet[2704]: I1107 23:49:28.492865 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9efcadec264390c6e6393f2289762437-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9efcadec264390c6e6393f2289762437\") " pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:28.716278 kubelet[2704]: E1107 23:49:28.716038 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:28.716278 kubelet[2704]: E1107 23:49:28.716107 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:28.716278 kubelet[2704]: E1107 23:49:28.716221 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:29.272967 kubelet[2704]: I1107 23:49:29.272918 2704 apiserver.go:52] "Watching apiserver" Nov 7 23:49:29.291844 kubelet[2704]: I1107 23:49:29.291794 2704 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 7 23:49:29.328039 kubelet[2704]: E1107 23:49:29.327992 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:29.329053 kubelet[2704]: I1107 23:49:29.329004 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:29.329850 kubelet[2704]: I1107 23:49:29.329828 2704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:29.335651 kubelet[2704]: E1107 23:49:29.334095 2704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 7 23:49:29.335651 kubelet[2704]: E1107 23:49:29.334244 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:29.337912 kubelet[2704]: E1107 23:49:29.337882 2704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 7 23:49:29.338068 kubelet[2704]: E1107 23:49:29.338050 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:29.355739 kubelet[2704]: I1107 23:49:29.355672 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.355656942 podStartE2EDuration="3.355656942s" podCreationTimestamp="2025-11-07 23:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:49:29.355281304 +0000 UTC m=+1.155142613" watchObservedRunningTime="2025-11-07 23:49:29.355656942 +0000 UTC m=+1.155518251" Nov 7 23:49:29.367966 kubelet[2704]: I1107 23:49:29.367878 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3678615330000001 podStartE2EDuration="1.367861533s" podCreationTimestamp="2025-11-07 23:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:49:29.367173396 +0000 UTC m=+1.167034705" watchObservedRunningTime="2025-11-07 23:49:29.367861533 +0000 UTC m=+1.167722841" Nov 7 23:49:29.378739 kubelet[2704]: I1107 23:49:29.378677 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.378659879 podStartE2EDuration="1.378659879s" podCreationTimestamp="2025-11-07 23:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:49:29.37798497 +0000 UTC m=+1.177846279" watchObservedRunningTime="2025-11-07 23:49:29.378659879 +0000 UTC m=+1.178521228" Nov 7 23:49:30.329114 kubelet[2704]: E1107 23:49:30.329029 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:30.329421 kubelet[2704]: E1107 23:49:30.329405 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:31.330795 kubelet[2704]: E1107 23:49:31.330768 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:32.406944 kubelet[2704]: E1107 23:49:32.406911 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:35.342389 kubelet[2704]: I1107 23:49:35.342351 2704 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 7 23:49:35.343076 containerd[1559]: time="2025-11-07T23:49:35.342976886Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 7 23:49:35.343359 kubelet[2704]: I1107 23:49:35.343193 2704 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 7 23:49:35.962664 systemd[1]: Created slice kubepods-besteffort-pod1c9d5a30_a069_48e5_9d8a_748f3a4a612f.slice - libcontainer container kubepods-besteffort-pod1c9d5a30_a069_48e5_9d8a_748f3a4a612f.slice. 
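
The "Created slice kubepods-besteffort-pod1c9d5a30_a069_48e5_9d8a_748f3a4a612f.slice" entry is the cgroup for the kube-proxy pod whose volumes are attached in the entries that follow (UID 1c9d5a30-a069-48e5-9d8a-748f3a4a612f): with the systemd cgroup driver reported earlier ("Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"), each pod gets a slice named after its QoS class and UID, and the UID's dashes show up as underscores because '-' is the hierarchy separator in systemd slice names. A rough sketch that merely reproduces the naming pattern visible in this journal (not the kubelet's actual code):

    # Reproduce the pod slice names seen in the journal above.
    def pod_slice_name(qos_class, pod_uid):
        # '-' separates slice hierarchy levels in systemd, so the pod UID's
        # dashes are replaced with underscores in the unit name.
        return "kubepods-{0}-pod{1}.slice".format(qos_class, pod_uid.replace("-", "_"))

    print(pod_slice_name("besteffort", "1c9d5a30-a069-48e5-9d8a-748f3a4a612f"))
    # -> kubepods-besteffort-pod1c9d5a30_a069_48e5_9d8a_748f3a4a612f.slice
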
Nov 7 23:49:36.048124 kubelet[2704]: I1107 23:49:36.048071 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c9d5a30-a069-48e5-9d8a-748f3a4a612f-xtables-lock\") pod \"kube-proxy-ml46l\" (UID: \"1c9d5a30-a069-48e5-9d8a-748f3a4a612f\") " pod="kube-system/kube-proxy-ml46l" Nov 7 23:49:36.048266 kubelet[2704]: I1107 23:49:36.048156 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r5lr\" (UniqueName: \"kubernetes.io/projected/1c9d5a30-a069-48e5-9d8a-748f3a4a612f-kube-api-access-5r5lr\") pod \"kube-proxy-ml46l\" (UID: \"1c9d5a30-a069-48e5-9d8a-748f3a4a612f\") " pod="kube-system/kube-proxy-ml46l" Nov 7 23:49:36.048266 kubelet[2704]: I1107 23:49:36.048221 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c9d5a30-a069-48e5-9d8a-748f3a4a612f-kube-proxy\") pod \"kube-proxy-ml46l\" (UID: \"1c9d5a30-a069-48e5-9d8a-748f3a4a612f\") " pod="kube-system/kube-proxy-ml46l" Nov 7 23:49:36.048266 kubelet[2704]: I1107 23:49:36.048240 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c9d5a30-a069-48e5-9d8a-748f3a4a612f-lib-modules\") pod \"kube-proxy-ml46l\" (UID: \"1c9d5a30-a069-48e5-9d8a-748f3a4a612f\") " pod="kube-system/kube-proxy-ml46l" Nov 7 23:49:36.158850 kubelet[2704]: E1107 23:49:36.158814 2704 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 7 23:49:36.158850 kubelet[2704]: E1107 23:49:36.158848 2704 projected.go:196] Error preparing data for projected volume kube-api-access-5r5lr for pod kube-system/kube-proxy-ml46l: configmap "kube-root-ca.crt" not found Nov 7 23:49:36.159006 kubelet[2704]: E1107 23:49:36.158923 2704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c9d5a30-a069-48e5-9d8a-748f3a4a612f-kube-api-access-5r5lr podName:1c9d5a30-a069-48e5-9d8a-748f3a4a612f nodeName:}" failed. No retries permitted until 2025-11-07 23:49:36.658900476 +0000 UTC m=+8.458761745 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5r5lr" (UniqueName: "kubernetes.io/projected/1c9d5a30-a069-48e5-9d8a-748f3a4a612f-kube-api-access-5r5lr") pod "kube-proxy-ml46l" (UID: "1c9d5a30-a069-48e5-9d8a-748f3a4a612f") : configmap "kube-root-ca.crt" not found Nov 7 23:49:36.461868 systemd[1]: Created slice kubepods-besteffort-pod86e2da72_b575_42de_a2d4_977bd5433f90.slice - libcontainer container kubepods-besteffort-pod86e2da72_b575_42de_a2d4_977bd5433f90.slice. 
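
The MountVolume.SetUp failure above is expected this early in the bootstrap: the kube-api-access-5r5lr projected volume bundles the service-account token with the kube-root-ca.crt ConfigMap, and that ConfigMap had not yet been published into the kube-system namespace, so the volume manager schedules a retry ("No retries permitted until ... durationBeforeRetry 500ms"). A small sketch of the retry schedule implied by that message; the doubling factor and the roughly two-minute cap are assumptions for illustration, not values taken from the log.

    # Sketch of the retry backoff implied by "durationBeforeRetry 500ms".
    # Assumptions (not from the log): the delay doubles per failure and is capped.
    from datetime import timedelta

    INITIAL = timedelta(milliseconds=500)
    CAP = timedelta(minutes=2, seconds=2)   # assumed cap, for illustration only

    def backoff_schedule(failures):
        delay = INITIAL
        for _ in range(failures):
            yield min(delay, CAP)
            delay = delay * 2

    for i, d in enumerate(backoff_schedule(6), start=1):
        print("retry %d after %s" % (i, d))
    # retry 1 after 0:00:00.500000, retry 2 after 0:00:01, retry 3 after 0:00:02, ...

In this journal the first retry evidently succeeded: the retry was scheduled for 23:49:36.658 and the kube-proxy sandbox is created at 23:49:36.871.
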
Nov 7 23:49:36.551807 kubelet[2704]: I1107 23:49:36.551732 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86e2da72-b575-42de-a2d4-977bd5433f90-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-rffnv\" (UID: \"86e2da72-b575-42de-a2d4-977bd5433f90\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-rffnv" Nov 7 23:49:36.551807 kubelet[2704]: I1107 23:49:36.551810 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667l9\" (UniqueName: \"kubernetes.io/projected/86e2da72-b575-42de-a2d4-977bd5433f90-kube-api-access-667l9\") pod \"tigera-operator-65cdcdfd6d-rffnv\" (UID: \"86e2da72-b575-42de-a2d4-977bd5433f90\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-rffnv" Nov 7 23:49:36.762280 kubelet[2704]: E1107 23:49:36.761941 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:36.770750 containerd[1559]: time="2025-11-07T23:49:36.770707078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-rffnv,Uid:86e2da72-b575-42de-a2d4-977bd5433f90,Namespace:tigera-operator,Attempt:0,}" Nov 7 23:49:36.792038 containerd[1559]: time="2025-11-07T23:49:36.791993337Z" level=info msg="connecting to shim a42ee1ced47ebaa75c404cdabcdc4c57449aed0e35ec68ca3a642902e591eabe" address="unix:///run/containerd/s/6bafbd2c328b60f2c4415a23f55bd3435d510ef811a662e12da937a86fc20f92" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:49:36.829874 systemd[1]: Started cri-containerd-a42ee1ced47ebaa75c404cdabcdc4c57449aed0e35ec68ca3a642902e591eabe.scope - libcontainer container a42ee1ced47ebaa75c404cdabcdc4c57449aed0e35ec68ca3a642902e591eabe. Nov 7 23:49:36.870441 kubelet[2704]: E1107 23:49:36.870406 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:36.871595 containerd[1559]: time="2025-11-07T23:49:36.871554879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ml46l,Uid:1c9d5a30-a069-48e5-9d8a-748f3a4a612f,Namespace:kube-system,Attempt:0,}" Nov 7 23:49:36.875796 containerd[1559]: time="2025-11-07T23:49:36.875751105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-rffnv,Uid:86e2da72-b575-42de-a2d4-977bd5433f90,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a42ee1ced47ebaa75c404cdabcdc4c57449aed0e35ec68ca3a642902e591eabe\"" Nov 7 23:49:36.878282 containerd[1559]: time="2025-11-07T23:49:36.878242456Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 7 23:49:36.897151 containerd[1559]: time="2025-11-07T23:49:36.897108000Z" level=info msg="connecting to shim 3a4758b9b8bcd894e2af589cf0f92b9eae0418ac7b061d277e142dca88ec11e2" address="unix:///run/containerd/s/9894bf128ef584c9e387ef0c977465e62ec3cf32c2cd6e1a29fc772109865546" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:49:36.916837 systemd[1]: Started cri-containerd-3a4758b9b8bcd894e2af589cf0f92b9eae0418ac7b061d277e142dca88ec11e2.scope - libcontainer container 3a4758b9b8bcd894e2af589cf0f92b9eae0418ac7b061d277e142dca88ec11e2. 
Nov 7 23:49:36.954318 containerd[1559]: time="2025-11-07T23:49:36.954273884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ml46l,Uid:1c9d5a30-a069-48e5-9d8a-748f3a4a612f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a4758b9b8bcd894e2af589cf0f92b9eae0418ac7b061d277e142dca88ec11e2\"" Nov 7 23:49:36.955265 kubelet[2704]: E1107 23:49:36.955245 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:36.979333 containerd[1559]: time="2025-11-07T23:49:36.979249919Z" level=info msg="CreateContainer within sandbox \"3a4758b9b8bcd894e2af589cf0f92b9eae0418ac7b061d277e142dca88ec11e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 7 23:49:36.992024 containerd[1559]: time="2025-11-07T23:49:36.991962359Z" level=info msg="Container 4e8fbc7e4dbe1440405ed636f41f5f328efc0e264f0c4db0bf90007d78c92c96: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:49:37.000330 containerd[1559]: time="2025-11-07T23:49:37.000257869Z" level=info msg="CreateContainer within sandbox \"3a4758b9b8bcd894e2af589cf0f92b9eae0418ac7b061d277e142dca88ec11e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4e8fbc7e4dbe1440405ed636f41f5f328efc0e264f0c4db0bf90007d78c92c96\"" Nov 7 23:49:37.001042 containerd[1559]: time="2025-11-07T23:49:37.001017243Z" level=info msg="StartContainer for \"4e8fbc7e4dbe1440405ed636f41f5f328efc0e264f0c4db0bf90007d78c92c96\"" Nov 7 23:49:37.002611 containerd[1559]: time="2025-11-07T23:49:37.002572324Z" level=info msg="connecting to shim 4e8fbc7e4dbe1440405ed636f41f5f328efc0e264f0c4db0bf90007d78c92c96" address="unix:///run/containerd/s/9894bf128ef584c9e387ef0c977465e62ec3cf32c2cd6e1a29fc772109865546" protocol=ttrpc version=3 Nov 7 23:49:37.031858 systemd[1]: Started cri-containerd-4e8fbc7e4dbe1440405ed636f41f5f328efc0e264f0c4db0bf90007d78c92c96.scope - libcontainer container 4e8fbc7e4dbe1440405ed636f41f5f328efc0e264f0c4db0bf90007d78c92c96. 
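
Note that the kube-proxy container (4e8fbc7e...) connects to the same shim socket under /run/containerd/s/ as the sandbox it runs in (3a4758b9...): containerd starts one shim per pod sandbox and drives the pod's containers through that shim over ttrpc. A short sketch that groups the "connecting to shim" events by socket address to make the pairing visible; the two sample lines are abbreviated copies of the entries above.

    # Group containerd "connecting to shim" events by shim socket address.
    import re
    from collections import defaultdict

    events = [
        'connecting to shim 3a4758b9b8bcd894e2af589cf0f92b9eae0418ac7b061d277e142dca88ec11e2 address="unix:///run/containerd/s/9894bf128ef584c9e387ef0c977465e62ec3cf32c2cd6e1a29fc772109865546"',
        'connecting to shim 4e8fbc7e4dbe1440405ed636f41f5f328efc0e264f0c4db0bf90007d78c92c96 address="unix:///run/containerd/s/9894bf128ef584c9e387ef0c977465e62ec3cf32c2cd6e1a29fc772109865546"',
    ]

    pattern = re.compile(r'connecting to shim (\S+) address="([^"]+)"')
    by_socket = defaultdict(list)
    for line in events:
        shim_id, address = pattern.match(line).groups()
        by_socket[address].append(shim_id[:12])

    for address, ids in by_socket.items():
        print(address.rsplit("/", 1)[-1][:12], "->", ids)
    # 9894bf128ef5 -> ['3a4758b9b8bc', '4e8fbc7e4dbe']  (sandbox and its container share one shim)
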
Nov 7 23:49:37.113993 containerd[1559]: time="2025-11-07T23:49:37.113952616Z" level=info msg="StartContainer for \"4e8fbc7e4dbe1440405ed636f41f5f328efc0e264f0c4db0bf90007d78c92c96\" returns successfully" Nov 7 23:49:37.345290 kubelet[2704]: E1107 23:49:37.345195 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:37.351956 kubelet[2704]: E1107 23:49:37.350102 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:37.371218 kubelet[2704]: I1107 23:49:37.371148 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ml46l" podStartSLOduration=2.371132522 podStartE2EDuration="2.371132522s" podCreationTimestamp="2025-11-07 23:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:49:37.370516596 +0000 UTC m=+9.170377905" watchObservedRunningTime="2025-11-07 23:49:37.371132522 +0000 UTC m=+9.170993871" Nov 7 23:49:38.351554 kubelet[2704]: E1107 23:49:38.349587 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:38.643043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2750484384.mount: Deactivated successfully. Nov 7 23:49:39.526777 kubelet[2704]: E1107 23:49:39.526467 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:40.090417 containerd[1559]: time="2025-11-07T23:49:40.090372306Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:40.091281 containerd[1559]: time="2025-11-07T23:49:40.090829609Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 7 23:49:40.091659 containerd[1559]: time="2025-11-07T23:49:40.091619795Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:40.093681 containerd[1559]: time="2025-11-07T23:49:40.093647434Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:40.094479 containerd[1559]: time="2025-11-07T23:49:40.094416469Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.216135836s" Nov 7 23:49:40.094479 containerd[1559]: time="2025-11-07T23:49:40.094466805Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 7 23:49:40.099071 containerd[1559]: time="2025-11-07T23:49:40.099024166Z" level=info msg="CreateContainer within sandbox 
\"a42ee1ced47ebaa75c404cdabcdc4c57449aed0e35ec68ca3a642902e591eabe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 7 23:49:40.105147 containerd[1559]: time="2025-11-07T23:49:40.105111920Z" level=info msg="Container 268c0a81f2ede0bb746e35c47f764d646888cd43d16098da9afa983d4e92c933: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:49:40.111324 containerd[1559]: time="2025-11-07T23:49:40.111272281Z" level=info msg="CreateContainer within sandbox \"a42ee1ced47ebaa75c404cdabcdc4c57449aed0e35ec68ca3a642902e591eabe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"268c0a81f2ede0bb746e35c47f764d646888cd43d16098da9afa983d4e92c933\"" Nov 7 23:49:40.111874 containerd[1559]: time="2025-11-07T23:49:40.111836893Z" level=info msg="StartContainer for \"268c0a81f2ede0bb746e35c47f764d646888cd43d16098da9afa983d4e92c933\"" Nov 7 23:49:40.114048 containerd[1559]: time="2025-11-07T23:49:40.114010543Z" level=info msg="connecting to shim 268c0a81f2ede0bb746e35c47f764d646888cd43d16098da9afa983d4e92c933" address="unix:///run/containerd/s/6bafbd2c328b60f2c4415a23f55bd3435d510ef811a662e12da937a86fc20f92" protocol=ttrpc version=3 Nov 7 23:49:40.174829 systemd[1]: Started cri-containerd-268c0a81f2ede0bb746e35c47f764d646888cd43d16098da9afa983d4e92c933.scope - libcontainer container 268c0a81f2ede0bb746e35c47f764d646888cd43d16098da9afa983d4e92c933. Nov 7 23:49:40.202004 containerd[1559]: time="2025-11-07T23:49:40.201967979Z" level=info msg="StartContainer for \"268c0a81f2ede0bb746e35c47f764d646888cd43d16098da9afa983d4e92c933\" returns successfully" Nov 7 23:49:40.362666 kubelet[2704]: E1107 23:49:40.362511 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:42.417022 kubelet[2704]: E1107 23:49:42.416980 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:42.427830 kubelet[2704]: I1107 23:49:42.427736 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-rffnv" podStartSLOduration=3.210209337 podStartE2EDuration="6.427717818s" podCreationTimestamp="2025-11-07 23:49:36 +0000 UTC" firstStartedPulling="2025-11-07 23:49:36.877808162 +0000 UTC m=+8.677669471" lastFinishedPulling="2025-11-07 23:49:40.095316643 +0000 UTC m=+11.895177952" observedRunningTime="2025-11-07 23:49:40.427509291 +0000 UTC m=+12.227370600" watchObservedRunningTime="2025-11-07 23:49:42.427717818 +0000 UTC m=+14.227579087" Nov 7 23:49:43.898454 update_engine[1544]: I20251107 23:49:43.898377 1544 update_attempter.cc:509] Updating boot flags... Nov 7 23:49:45.662693 sudo[1776]: pam_unix(sudo:session): session closed for user root Nov 7 23:49:45.664598 sshd[1775]: Connection closed by 10.0.0.1 port 49616 Nov 7 23:49:45.665212 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Nov 7 23:49:45.670529 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Nov 7 23:49:45.671967 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:49616.service: Deactivated successfully. Nov 7 23:49:45.674369 systemd[1]: session-7.scope: Deactivated successfully. Nov 7 23:49:45.674660 systemd[1]: session-7.scope: Consumed 7.186s CPU time, 213.3M memory peak. Nov 7 23:49:45.676996 systemd-logind[1543]: Removed session 7. 
Nov 7 23:49:53.100423 systemd[1]: Created slice kubepods-besteffort-pod1aff498b_19f2_4ca3_a2a0_bee2bb5cffe9.slice - libcontainer container kubepods-besteffort-pod1aff498b_19f2_4ca3_a2a0_bee2bb5cffe9.slice. Nov 7 23:49:53.167612 kubelet[2704]: I1107 23:49:53.167526 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1aff498b-19f2-4ca3-a2a0-bee2bb5cffe9-tigera-ca-bundle\") pod \"calico-typha-59854cd4c4-n7cr2\" (UID: \"1aff498b-19f2-4ca3-a2a0-bee2bb5cffe9\") " pod="calico-system/calico-typha-59854cd4c4-n7cr2" Nov 7 23:49:53.167612 kubelet[2704]: I1107 23:49:53.167614 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cznm5\" (UniqueName: \"kubernetes.io/projected/1aff498b-19f2-4ca3-a2a0-bee2bb5cffe9-kube-api-access-cznm5\") pod \"calico-typha-59854cd4c4-n7cr2\" (UID: \"1aff498b-19f2-4ca3-a2a0-bee2bb5cffe9\") " pod="calico-system/calico-typha-59854cd4c4-n7cr2" Nov 7 23:49:53.168208 kubelet[2704]: I1107 23:49:53.167738 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1aff498b-19f2-4ca3-a2a0-bee2bb5cffe9-typha-certs\") pod \"calico-typha-59854cd4c4-n7cr2\" (UID: \"1aff498b-19f2-4ca3-a2a0-bee2bb5cffe9\") " pod="calico-system/calico-typha-59854cd4c4-n7cr2" Nov 7 23:49:53.281346 systemd[1]: Created slice kubepods-besteffort-pod7bc6c65e_fb9e_412f_8c70_60bf90134aa6.slice - libcontainer container kubepods-besteffort-pod7bc6c65e_fb9e_412f_8c70_60bf90134aa6.slice. Nov 7 23:49:53.370679 kubelet[2704]: I1107 23:49:53.370545 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-cni-bin-dir\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.370869 kubelet[2704]: I1107 23:49:53.370852 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-cni-net-dir\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.370939 kubelet[2704]: I1107 23:49:53.370928 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-var-lib-calico\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.370998 kubelet[2704]: I1107 23:49:53.370986 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8djrx\" (UniqueName: \"kubernetes.io/projected/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-kube-api-access-8djrx\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.371091 kubelet[2704]: I1107 23:49:53.371075 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-lib-modules\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " 
pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.371351 kubelet[2704]: I1107 23:49:53.371156 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-policysync\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.371351 kubelet[2704]: I1107 23:49:53.371199 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-tigera-ca-bundle\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.371351 kubelet[2704]: I1107 23:49:53.371216 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-cni-log-dir\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.371351 kubelet[2704]: I1107 23:49:53.371232 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-node-certs\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.371351 kubelet[2704]: I1107 23:49:53.371260 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-var-run-calico\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.371492 kubelet[2704]: I1107 23:49:53.371305 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-xtables-lock\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.371492 kubelet[2704]: I1107 23:49:53.371377 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7bc6c65e-fb9e-412f-8c70-60bf90134aa6-flexvol-driver-host\") pod \"calico-node-mhp7h\" (UID: \"7bc6c65e-fb9e-412f-8c70-60bf90134aa6\") " pod="calico-system/calico-node-mhp7h" Nov 7 23:49:53.408483 kubelet[2704]: E1107 23:49:53.408446 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:53.409252 containerd[1559]: time="2025-11-07T23:49:53.409169439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59854cd4c4-n7cr2,Uid:1aff498b-19f2-4ca3-a2a0-bee2bb5cffe9,Namespace:calico-system,Attempt:0,}" Nov 7 23:49:53.456913 containerd[1559]: time="2025-11-07T23:49:53.456860355Z" level=info msg="connecting to shim f1a548f6ddba22289dbf86cadee9ec4610edefe6631d6c15b7c568be67fb1201" address="unix:///run/containerd/s/70a554ff7c96437e182a7b1622437db9b0e27a4114ec88aba67bbb2a3a96af80" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:49:53.458896 
kubelet[2704]: E1107 23:49:53.458040 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 23:49:53.476369 kubelet[2704]: E1107 23:49:53.476318 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.476369 kubelet[2704]: W1107 23:49:53.476348 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.476369 kubelet[2704]: E1107 23:49:53.476371 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.478437 kubelet[2704]: E1107 23:49:53.477737 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.478437 kubelet[2704]: W1107 23:49:53.477762 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.478437 kubelet[2704]: E1107 23:49:53.477782 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.478437 kubelet[2704]: E1107 23:49:53.478124 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.478437 kubelet[2704]: W1107 23:49:53.478149 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.478437 kubelet[2704]: E1107 23:49:53.478159 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.478781 kubelet[2704]: E1107 23:49:53.478487 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.478781 kubelet[2704]: W1107 23:49:53.478499 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.478781 kubelet[2704]: E1107 23:49:53.478509 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.479374 kubelet[2704]: E1107 23:49:53.479343 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.479374 kubelet[2704]: W1107 23:49:53.479364 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.479479 kubelet[2704]: E1107 23:49:53.479380 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.480530 kubelet[2704]: E1107 23:49:53.480149 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.480530 kubelet[2704]: W1107 23:49:53.480170 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.480530 kubelet[2704]: E1107 23:49:53.480184 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.489776 kubelet[2704]: E1107 23:49:53.489746 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.490036 kubelet[2704]: W1107 23:49:53.489905 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.490036 kubelet[2704]: E1107 23:49:53.489935 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.495751 kubelet[2704]: E1107 23:49:53.495717 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.495751 kubelet[2704]: W1107 23:49:53.495745 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.495902 kubelet[2704]: E1107 23:49:53.495767 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.511012 systemd[1]: Started cri-containerd-f1a548f6ddba22289dbf86cadee9ec4610edefe6631d6c15b7c568be67fb1201.scope - libcontainer container f1a548f6ddba22289dbf86cadee9ec4610edefe6631d6c15b7c568be67fb1201. 
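The repeated driver-call.go/plugins.go triplets above all come from one condition: kubelet probes its FlexVolume plugin directory, the nodeagent~uds/uds binary is not installed yet, the call therefore produces no output, and decoding that empty output fails. The small stand-alone program below reproduces both error strings; the binary name in it is a placeholder, and this is not kubelet's implementation.

    // flexvol_probe_errors.go - demonstrates the two errors seen in the log:
    // a missing executable ("executable file not found in $PATH") and
    // JSON decoding of empty output ("unexpected end of JSON input").
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// 1. Looking up a driver binary that is not present (placeholder name,
    	//    not in $PATH) returns exec.ErrNotFound, which prints as the same
    	//    "executable file not found in $PATH" text kubelet logs above.
    	if _, err := exec.LookPath("uds-driver-not-installed"); err != nil {
    		fmt.Println("lookup error:", err)
    	}

    	// 2. Because the driver never ran, kubelet sees empty output, and
    	//    unmarshalling an empty byte slice is exactly what produces
    	//    "unexpected end of JSON input".
    	var status map[string]interface{}
    	if err := json.Unmarshal([]byte(""), &status); err != nil {
    		fmt.Println("unmarshal error:", err)
    	}
    }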
Nov 7 23:49:53.539155 kubelet[2704]: E1107 23:49:53.538519 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.539155 kubelet[2704]: W1107 23:49:53.539005 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.539155 kubelet[2704]: E1107 23:49:53.539038 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.541748 kubelet[2704]: E1107 23:49:53.541715 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.541809 kubelet[2704]: W1107 23:49:53.541737 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.541809 kubelet[2704]: E1107 23:49:53.541788 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.542079 kubelet[2704]: E1107 23:49:53.542060 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.542079 kubelet[2704]: W1107 23:49:53.542072 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.542079 kubelet[2704]: E1107 23:49:53.542081 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.542263 kubelet[2704]: E1107 23:49:53.542241 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.542263 kubelet[2704]: W1107 23:49:53.542255 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.542263 kubelet[2704]: E1107 23:49:53.542264 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.542426 kubelet[2704]: E1107 23:49:53.542408 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.542426 kubelet[2704]: W1107 23:49:53.542419 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.542426 kubelet[2704]: E1107 23:49:53.542427 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.542556 kubelet[2704]: E1107 23:49:53.542540 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.542556 kubelet[2704]: W1107 23:49:53.542551 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.542608 kubelet[2704]: E1107 23:49:53.542561 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.542698 kubelet[2704]: E1107 23:49:53.542683 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.542698 kubelet[2704]: W1107 23:49:53.542694 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.542760 kubelet[2704]: E1107 23:49:53.542703 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.542833 kubelet[2704]: E1107 23:49:53.542816 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.542833 kubelet[2704]: W1107 23:49:53.542827 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.542833 kubelet[2704]: E1107 23:49:53.542833 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.542999 kubelet[2704]: E1107 23:49:53.542972 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.542999 kubelet[2704]: W1107 23:49:53.542992 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.542999 kubelet[2704]: E1107 23:49:53.543001 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.543148 kubelet[2704]: E1107 23:49:53.543132 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.543148 kubelet[2704]: W1107 23:49:53.543142 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.543202 kubelet[2704]: E1107 23:49:53.543157 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.543301 kubelet[2704]: E1107 23:49:53.543283 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.543301 kubelet[2704]: W1107 23:49:53.543296 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.543361 kubelet[2704]: E1107 23:49:53.543307 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.543454 kubelet[2704]: E1107 23:49:53.543438 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.543454 kubelet[2704]: W1107 23:49:53.543448 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.543454 kubelet[2704]: E1107 23:49:53.543456 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.543607 kubelet[2704]: E1107 23:49:53.543590 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.543607 kubelet[2704]: W1107 23:49:53.543601 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.543607 kubelet[2704]: E1107 23:49:53.543608 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.543814 kubelet[2704]: E1107 23:49:53.543783 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.543814 kubelet[2704]: W1107 23:49:53.543790 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.543814 kubelet[2704]: E1107 23:49:53.543798 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.543947 kubelet[2704]: E1107 23:49:53.543928 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.543947 kubelet[2704]: W1107 23:49:53.543938 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.543947 kubelet[2704]: E1107 23:49:53.543945 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.544096 kubelet[2704]: E1107 23:49:53.544079 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.544096 kubelet[2704]: W1107 23:49:53.544090 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.544149 kubelet[2704]: E1107 23:49:53.544098 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.544254 kubelet[2704]: E1107 23:49:53.544232 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.544254 kubelet[2704]: W1107 23:49:53.544243 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.544254 kubelet[2704]: E1107 23:49:53.544250 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.544396 kubelet[2704]: E1107 23:49:53.544379 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.544396 kubelet[2704]: W1107 23:49:53.544391 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.544446 kubelet[2704]: E1107 23:49:53.544399 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.544540 kubelet[2704]: E1107 23:49:53.544525 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.544540 kubelet[2704]: W1107 23:49:53.544535 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.544651 kubelet[2704]: E1107 23:49:53.544542 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.544678 kubelet[2704]: E1107 23:49:53.544662 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.544678 kubelet[2704]: W1107 23:49:53.544670 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.544722 kubelet[2704]: E1107 23:49:53.544679 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.553791 containerd[1559]: time="2025-11-07T23:49:53.553730441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59854cd4c4-n7cr2,Uid:1aff498b-19f2-4ca3-a2a0-bee2bb5cffe9,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1a548f6ddba22289dbf86cadee9ec4610edefe6631d6c15b7c568be67fb1201\"" Nov 7 23:49:53.555952 kubelet[2704]: E1107 23:49:53.555925 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:53.557661 containerd[1559]: time="2025-11-07T23:49:53.557432603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 7 23:49:53.573816 kubelet[2704]: E1107 23:49:53.573786 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.574047 kubelet[2704]: W1107 23:49:53.573958 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.574176 kubelet[2704]: E1107 23:49:53.574160 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.574265 kubelet[2704]: I1107 23:49:53.574251 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c-registration-dir\") pod \"csi-node-driver-nxrk8\" (UID: \"cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c\") " pod="calico-system/csi-node-driver-nxrk8" Nov 7 23:49:53.574574 kubelet[2704]: E1107 23:49:53.574557 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.574702 kubelet[2704]: W1107 23:49:53.574656 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.574702 kubelet[2704]: E1107 23:49:53.574677 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.574924 kubelet[2704]: I1107 23:49:53.574907 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c-varrun\") pod \"csi-node-driver-nxrk8\" (UID: \"cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c\") " pod="calico-system/csi-node-driver-nxrk8" Nov 7 23:49:53.575080 kubelet[2704]: E1107 23:49:53.575045 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.575080 kubelet[2704]: W1107 23:49:53.575057 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.575080 kubelet[2704]: E1107 23:49:53.575067 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.575514 kubelet[2704]: E1107 23:49:53.575384 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.575514 kubelet[2704]: W1107 23:49:53.575398 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.575514 kubelet[2704]: E1107 23:49:53.575409 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.576040 kubelet[2704]: E1107 23:49:53.575953 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.576040 kubelet[2704]: W1107 23:49:53.575968 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.576040 kubelet[2704]: E1107 23:49:53.575989 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.576315 kubelet[2704]: E1107 23:49:53.576301 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.576376 kubelet[2704]: W1107 23:49:53.576365 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.576430 kubelet[2704]: E1107 23:49:53.576419 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.576777 kubelet[2704]: E1107 23:49:53.576760 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.577052 kubelet[2704]: W1107 23:49:53.576884 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.577052 kubelet[2704]: E1107 23:49:53.576927 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.577052 kubelet[2704]: I1107 23:49:53.576960 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c-kubelet-dir\") pod \"csi-node-driver-nxrk8\" (UID: \"cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c\") " pod="calico-system/csi-node-driver-nxrk8" Nov 7 23:49:53.577434 kubelet[2704]: E1107 23:49:53.577416 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.577480 kubelet[2704]: W1107 23:49:53.577447 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.577480 kubelet[2704]: E1107 23:49:53.577466 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.577746 kubelet[2704]: E1107 23:49:53.577732 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.577798 kubelet[2704]: W1107 23:49:53.577781 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.577839 kubelet[2704]: E1107 23:49:53.577800 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.578750 kubelet[2704]: E1107 23:49:53.578698 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.578750 kubelet[2704]: W1107 23:49:53.578712 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.578750 kubelet[2704]: E1107 23:49:53.578724 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.578750 kubelet[2704]: I1107 23:49:53.578747 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c-socket-dir\") pod \"csi-node-driver-nxrk8\" (UID: \"cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c\") " pod="calico-system/csi-node-driver-nxrk8" Nov 7 23:49:53.579721 kubelet[2704]: E1107 23:49:53.578911 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.579721 kubelet[2704]: W1107 23:49:53.578927 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.579721 kubelet[2704]: E1107 23:49:53.578937 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.579721 kubelet[2704]: I1107 23:49:53.578949 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq94w\" (UniqueName: \"kubernetes.io/projected/cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c-kube-api-access-dq94w\") pod \"csi-node-driver-nxrk8\" (UID: \"cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c\") " pod="calico-system/csi-node-driver-nxrk8" Nov 7 23:49:53.579721 kubelet[2704]: E1107 23:49:53.579155 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.579721 kubelet[2704]: W1107 23:49:53.579166 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.579721 kubelet[2704]: E1107 23:49:53.579175 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.579721 kubelet[2704]: E1107 23:49:53.579357 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.579721 kubelet[2704]: W1107 23:49:53.579365 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.580034 kubelet[2704]: E1107 23:49:53.579374 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.580034 kubelet[2704]: E1107 23:49:53.579505 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.580034 kubelet[2704]: W1107 23:49:53.579512 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.580034 kubelet[2704]: E1107 23:49:53.579519 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.580034 kubelet[2704]: E1107 23:49:53.579639 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.580034 kubelet[2704]: W1107 23:49:53.579647 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.580034 kubelet[2704]: E1107 23:49:53.579654 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.588627 kubelet[2704]: E1107 23:49:53.588588 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:53.589102 containerd[1559]: time="2025-11-07T23:49:53.589060088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mhp7h,Uid:7bc6c65e-fb9e-412f-8c70-60bf90134aa6,Namespace:calico-system,Attempt:0,}" Nov 7 23:49:53.611341 containerd[1559]: time="2025-11-07T23:49:53.611294895Z" level=info msg="connecting to shim 1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2" address="unix:///run/containerd/s/4752746aac49383b525ce27b96fd2810a02524fa745c6e9cb0eaff77150f3ccb" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:49:53.637837 systemd[1]: Started cri-containerd-1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2.scope - libcontainer container 1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2. Nov 7 23:49:53.684445 kubelet[2704]: E1107 23:49:53.683969 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.684445 kubelet[2704]: W1107 23:49:53.683992 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.684445 kubelet[2704]: E1107 23:49:53.684018 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.684445 kubelet[2704]: E1107 23:49:53.684287 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.684445 kubelet[2704]: W1107 23:49:53.684296 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.684445 kubelet[2704]: E1107 23:49:53.684304 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.687181 kubelet[2704]: E1107 23:49:53.684501 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.687181 kubelet[2704]: W1107 23:49:53.684510 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.687181 kubelet[2704]: E1107 23:49:53.684519 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.687181 kubelet[2704]: E1107 23:49:53.684717 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.687181 kubelet[2704]: W1107 23:49:53.684725 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.687181 kubelet[2704]: E1107 23:49:53.684741 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.687181 kubelet[2704]: E1107 23:49:53.685038 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.687181 kubelet[2704]: W1107 23:49:53.685050 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.687181 kubelet[2704]: E1107 23:49:53.685059 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.687181 kubelet[2704]: E1107 23:49:53.685291 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.687377 containerd[1559]: time="2025-11-07T23:49:53.686759364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mhp7h,Uid:7bc6c65e-fb9e-412f-8c70-60bf90134aa6,Namespace:calico-system,Attempt:0,} returns sandbox id \"1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2\"" Nov 7 23:49:53.687410 kubelet[2704]: W1107 23:49:53.685304 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.687410 kubelet[2704]: E1107 23:49:53.685317 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.687410 kubelet[2704]: E1107 23:49:53.685627 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.687410 kubelet[2704]: W1107 23:49:53.685654 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.687410 kubelet[2704]: E1107 23:49:53.685662 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.687410 kubelet[2704]: E1107 23:49:53.685807 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.687410 kubelet[2704]: W1107 23:49:53.685828 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.687410 kubelet[2704]: E1107 23:49:53.685838 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.687410 kubelet[2704]: E1107 23:49:53.686017 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.687410 kubelet[2704]: W1107 23:49:53.686026 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.689232 kubelet[2704]: E1107 23:49:53.686034 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.689232 kubelet[2704]: E1107 23:49:53.686208 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.689232 kubelet[2704]: W1107 23:49:53.686223 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.689232 kubelet[2704]: E1107 23:49:53.686232 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.689232 kubelet[2704]: E1107 23:49:53.686435 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.689232 kubelet[2704]: W1107 23:49:53.686444 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.689232 kubelet[2704]: E1107 23:49:53.686453 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.689232 kubelet[2704]: E1107 23:49:53.686628 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.689232 kubelet[2704]: W1107 23:49:53.686663 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.689232 kubelet[2704]: E1107 23:49:53.686672 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.689575 kubelet[2704]: E1107 23:49:53.687968 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.689575 kubelet[2704]: W1107 23:49:53.687983 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.689575 kubelet[2704]: E1107 23:49:53.687995 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.689575 kubelet[2704]: E1107 23:49:53.688488 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.689575 kubelet[2704]: W1107 23:49:53.688502 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.689575 kubelet[2704]: E1107 23:49:53.688513 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.689575 kubelet[2704]: E1107 23:49:53.688695 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:53.690679 kubelet[2704]: E1107 23:49:53.689800 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.690679 kubelet[2704]: W1107 23:49:53.689814 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.690679 kubelet[2704]: E1107 23:49:53.689829 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.690679 kubelet[2704]: E1107 23:49:53.690344 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.690679 kubelet[2704]: W1107 23:49:53.690355 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.690679 kubelet[2704]: E1107 23:49:53.690382 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.691667 kubelet[2704]: E1107 23:49:53.691400 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.691667 kubelet[2704]: W1107 23:49:53.691553 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.691667 kubelet[2704]: E1107 23:49:53.691573 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.692381 kubelet[2704]: E1107 23:49:53.692331 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.692381 kubelet[2704]: W1107 23:49:53.692351 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.692381 kubelet[2704]: E1107 23:49:53.692365 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.692639 kubelet[2704]: E1107 23:49:53.692586 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.692639 kubelet[2704]: W1107 23:49:53.692601 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.692639 kubelet[2704]: E1107 23:49:53.692624 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.692982 kubelet[2704]: E1107 23:49:53.692929 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.692982 kubelet[2704]: W1107 23:49:53.692945 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.692982 kubelet[2704]: E1107 23:49:53.692977 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.693386 kubelet[2704]: E1107 23:49:53.693190 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.693386 kubelet[2704]: W1107 23:49:53.693200 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.693386 kubelet[2704]: E1107 23:49:53.693234 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:53.693651 kubelet[2704]: E1107 23:49:53.693606 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.693651 kubelet[2704]: W1107 23:49:53.693625 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.693717 kubelet[2704]: E1107 23:49:53.693658 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.694038 kubelet[2704]: E1107 23:49:53.694016 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.694038 kubelet[2704]: W1107 23:49:53.694034 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.694109 kubelet[2704]: E1107 23:49:53.694046 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.694275 kubelet[2704]: E1107 23:49:53.694258 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.694275 kubelet[2704]: W1107 23:49:53.694272 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.694325 kubelet[2704]: E1107 23:49:53.694281 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.694463 kubelet[2704]: E1107 23:49:53.694450 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.694485 kubelet[2704]: W1107 23:49:53.694463 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.694485 kubelet[2704]: E1107 23:49:53.694472 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:53.706616 kubelet[2704]: E1107 23:49:53.706582 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:53.706616 kubelet[2704]: W1107 23:49:53.706604 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:53.706616 kubelet[2704]: E1107 23:49:53.706622 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:54.563447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988482658.mount: Deactivated successfully. 
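A conforming FlexVolume driver answers kubelet's init probe with a small JSON status object, which is exactly what is missing while the uds binary is absent; Calico normally installs that binary into the flexvol-driver-host host path mounted above, using the pod2daemon-flexvol image pulled just below, after which these probe errors stop. The stub below is illustrative only (it is not Calico's actual uds driver) and assumes the documented status/capabilities response shape.

    // flexvol_init_stub.go - illustrative FlexVolume driver stub: prints the
    // JSON kubelet expects from "init" so the probe can be parsed successfully.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		out, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    		fmt.Println(string(out))
    		return
    	}
    	// Any other call is declared unsupported so the caller can fall back.
    	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(out))
    }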
Nov 7 23:49:55.148702 containerd[1559]: time="2025-11-07T23:49:55.148606082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:55.149283 containerd[1559]: time="2025-11-07T23:49:55.149097354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 7 23:49:55.150321 containerd[1559]: time="2025-11-07T23:49:55.150284740Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:55.152976 containerd[1559]: time="2025-11-07T23:49:55.152756015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:55.153346 containerd[1559]: time="2025-11-07T23:49:55.153324633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.595847839s" Nov 7 23:49:55.153385 containerd[1559]: time="2025-11-07T23:49:55.153354788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 7 23:49:55.155854 containerd[1559]: time="2025-11-07T23:49:55.155823143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 7 23:49:55.168097 containerd[1559]: time="2025-11-07T23:49:55.168048544Z" level=info msg="CreateContainer within sandbox \"f1a548f6ddba22289dbf86cadee9ec4610edefe6631d6c15b7c568be67fb1201\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 7 23:49:55.175668 containerd[1559]: time="2025-11-07T23:49:55.175576629Z" level=info msg="Container 57c7cc8647ff67d76d41607d594095171f28f94cf5761acdc72d0bfb9df3f819: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:49:55.182484 containerd[1559]: time="2025-11-07T23:49:55.182429716Z" level=info msg="CreateContainer within sandbox \"f1a548f6ddba22289dbf86cadee9ec4610edefe6631d6c15b7c568be67fb1201\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"57c7cc8647ff67d76d41607d594095171f28f94cf5761acdc72d0bfb9df3f819\"" Nov 7 23:49:55.184319 containerd[1559]: time="2025-11-07T23:49:55.184063382Z" level=info msg="StartContainer for \"57c7cc8647ff67d76d41607d594095171f28f94cf5761acdc72d0bfb9df3f819\"" Nov 7 23:49:55.185196 containerd[1559]: time="2025-11-07T23:49:55.185168183Z" level=info msg="connecting to shim 57c7cc8647ff67d76d41607d594095171f28f94cf5761acdc72d0bfb9df3f819" address="unix:///run/containerd/s/70a554ff7c96437e182a7b1622437db9b0e27a4114ec88aba67bbb2a3a96af80" protocol=ttrpc version=3 Nov 7 23:49:55.211866 systemd[1]: Started cri-containerd-57c7cc8647ff67d76d41607d594095171f28f94cf5761acdc72d0bfb9df3f819.scope - libcontainer container 57c7cc8647ff67d76d41607d594095171f28f94cf5761acdc72d0bfb9df3f819. 
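The containerd entries above show the pull of ghcr.io/flatcar/calico/typha:v3.30.4 completing in roughly 1.6s, followed by CreateContainer/StartContainer against the typha sandbox. The same pull can be requested directly over the CRI image service; the sketch below assumes containerd's default CRI socket at /run/containerd/containerd.sock (consistent with the containerd[1559] entries in this log) and is only a sketch of that call, not kubelet's code.

    // cri_pull_sketch.go - asks the CRI image service to pull the Typha image
    // referenced in the log, mirroring what kubelet does before starting it.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewImageServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	resp, err := client.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled image ref:", resp.ImageRef)
    }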
Nov 7 23:49:55.252453 containerd[1559]: time="2025-11-07T23:49:55.252412722Z" level=info msg="StartContainer for \"57c7cc8647ff67d76d41607d594095171f28f94cf5761acdc72d0bfb9df3f819\" returns successfully" Nov 7 23:49:55.309504 kubelet[2704]: E1107 23:49:55.308891 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 23:49:55.403129 kubelet[2704]: E1107 23:49:55.402748 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:55.458214 kubelet[2704]: E1107 23:49:55.458161 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.458393 kubelet[2704]: W1107 23:49:55.458374 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.458513 kubelet[2704]: E1107 23:49:55.458498 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.459235 kubelet[2704]: E1107 23:49:55.458840 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.459235 kubelet[2704]: W1107 23:49:55.458883 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.459235 kubelet[2704]: E1107 23:49:55.458978 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.459572 kubelet[2704]: E1107 23:49:55.459554 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.460225 kubelet[2704]: W1107 23:49:55.459763 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.460225 kubelet[2704]: E1107 23:49:55.459803 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.460386 kubelet[2704]: E1107 23:49:55.460371 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.461218 kubelet[2704]: W1107 23:49:55.460524 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.461218 kubelet[2704]: E1107 23:49:55.460545 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:55.461387 kubelet[2704]: E1107 23:49:55.461370 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.461454 kubelet[2704]: W1107 23:49:55.461432 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.461584 kubelet[2704]: E1107 23:49:55.461541 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.462627 kubelet[2704]: E1107 23:49:55.462397 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.462627 kubelet[2704]: W1107 23:49:55.462417 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.462627 kubelet[2704]: E1107 23:49:55.462429 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.463743 kubelet[2704]: E1107 23:49:55.463176 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.463743 kubelet[2704]: W1107 23:49:55.463189 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.463743 kubelet[2704]: E1107 23:49:55.463201 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.463980 kubelet[2704]: E1107 23:49:55.463874 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.464228 kubelet[2704]: W1107 23:49:55.464115 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.464626 kubelet[2704]: E1107 23:49:55.464330 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.465005 kubelet[2704]: E1107 23:49:55.464893 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.465372 kubelet[2704]: W1107 23:49:55.465167 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.465372 kubelet[2704]: E1107 23:49:55.465190 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:55.465868 kubelet[2704]: E1107 23:49:55.465816 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.466474 kubelet[2704]: W1107 23:49:55.466074 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.466474 kubelet[2704]: E1107 23:49:55.466097 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.467313 kubelet[2704]: E1107 23:49:55.466943 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.467313 kubelet[2704]: W1107 23:49:55.467231 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.467313 kubelet[2704]: E1107 23:49:55.467244 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.467902 kubelet[2704]: E1107 23:49:55.467886 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.468356 kubelet[2704]: W1107 23:49:55.467959 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.468356 kubelet[2704]: E1107 23:49:55.467976 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.468872 kubelet[2704]: E1107 23:49:55.468701 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.469267 kubelet[2704]: W1107 23:49:55.469084 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.469366 kubelet[2704]: E1107 23:49:55.469350 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.470068 kubelet[2704]: E1107 23:49:55.470050 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.470225 kubelet[2704]: W1107 23:49:55.470200 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.470384 kubelet[2704]: E1107 23:49:55.470369 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:55.470958 kubelet[2704]: E1107 23:49:55.470754 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.470958 kubelet[2704]: W1107 23:49:55.470769 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.470958 kubelet[2704]: E1107 23:49:55.470781 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.497148 kubelet[2704]: E1107 23:49:55.497113 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.497148 kubelet[2704]: W1107 23:49:55.497137 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.497148 kubelet[2704]: E1107 23:49:55.497157 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.497418 kubelet[2704]: E1107 23:49:55.497384 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.497418 kubelet[2704]: W1107 23:49:55.497394 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.497418 kubelet[2704]: E1107 23:49:55.497403 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.497767 kubelet[2704]: E1107 23:49:55.497627 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.497767 kubelet[2704]: W1107 23:49:55.497665 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.497767 kubelet[2704]: E1107 23:49:55.497676 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.498526 kubelet[2704]: E1107 23:49:55.497926 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.498526 kubelet[2704]: W1107 23:49:55.497939 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.498526 kubelet[2704]: E1107 23:49:55.498294 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:55.499041 kubelet[2704]: E1107 23:49:55.499006 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.499041 kubelet[2704]: W1107 23:49:55.499037 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.499041 kubelet[2704]: E1107 23:49:55.499052 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.499499 kubelet[2704]: E1107 23:49:55.499353 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.499499 kubelet[2704]: W1107 23:49:55.499365 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.499499 kubelet[2704]: E1107 23:49:55.499375 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.500312 kubelet[2704]: E1107 23:49:55.499594 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.500312 kubelet[2704]: W1107 23:49:55.499621 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.500312 kubelet[2704]: E1107 23:49:55.499658 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.500312 kubelet[2704]: E1107 23:49:55.499880 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.500312 kubelet[2704]: W1107 23:49:55.499910 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.500312 kubelet[2704]: E1107 23:49:55.499921 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.500312 kubelet[2704]: E1107 23:49:55.500126 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.500312 kubelet[2704]: W1107 23:49:55.500154 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.500970 kubelet[2704]: E1107 23:49:55.500517 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:55.502102 kubelet[2704]: E1107 23:49:55.502081 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.502102 kubelet[2704]: W1107 23:49:55.502101 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.502279 kubelet[2704]: E1107 23:49:55.502116 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.502708 kubelet[2704]: E1107 23:49:55.502694 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.502708 kubelet[2704]: W1107 23:49:55.502707 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.502787 kubelet[2704]: E1107 23:49:55.502717 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.502894 kubelet[2704]: E1107 23:49:55.502872 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.502894 kubelet[2704]: W1107 23:49:55.502883 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.502894 kubelet[2704]: E1107 23:49:55.502890 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.503031 kubelet[2704]: E1107 23:49:55.503018 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.503073 kubelet[2704]: W1107 23:49:55.503029 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.503073 kubelet[2704]: E1107 23:49:55.503044 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.504764 kubelet[2704]: E1107 23:49:55.504742 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.504825 kubelet[2704]: W1107 23:49:55.504766 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.504825 kubelet[2704]: E1107 23:49:55.504780 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:55.505280 kubelet[2704]: E1107 23:49:55.505249 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.505280 kubelet[2704]: W1107 23:49:55.505267 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.505280 kubelet[2704]: E1107 23:49:55.505281 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.505562 kubelet[2704]: E1107 23:49:55.505480 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.505562 kubelet[2704]: W1107 23:49:55.505495 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.505562 kubelet[2704]: E1107 23:49:55.505505 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.505891 kubelet[2704]: E1107 23:49:55.505862 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.505891 kubelet[2704]: W1107 23:49:55.505887 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.505952 kubelet[2704]: E1107 23:49:55.505900 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:55.506100 kubelet[2704]: E1107 23:49:55.506083 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:55.506100 kubelet[2704]: W1107 23:49:55.506098 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:55.506151 kubelet[2704]: E1107 23:49:55.506107 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:56.379972 containerd[1559]: time="2025-11-07T23:49:56.379381109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:56.379972 containerd[1559]: time="2025-11-07T23:49:56.379900261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 7 23:49:56.380799 containerd[1559]: time="2025-11-07T23:49:56.380767275Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:56.382557 containerd[1559]: time="2025-11-07T23:49:56.382532577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:49:56.383389 containerd[1559]: time="2025-11-07T23:49:56.383257455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.227388639s" Nov 7 23:49:56.383389 containerd[1559]: time="2025-11-07T23:49:56.383292489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 7 23:49:56.387350 containerd[1559]: time="2025-11-07T23:49:56.387317370Z" level=info msg="CreateContainer within sandbox \"1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 7 23:49:56.395449 containerd[1559]: time="2025-11-07T23:49:56.395178084Z" level=info msg="Container 75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:49:56.404134 containerd[1559]: time="2025-11-07T23:49:56.404082622Z" level=info msg="CreateContainer within sandbox \"1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4\"" Nov 7 23:49:56.404496 containerd[1559]: time="2025-11-07T23:49:56.404461678Z" level=info msg="StartContainer for \"75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4\"" Nov 7 23:49:56.406380 containerd[1559]: time="2025-11-07T23:49:56.406245977Z" level=info msg="connecting to shim 75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4" address="unix:///run/containerd/s/4752746aac49383b525ce27b96fd2810a02524fa745c6e9cb0eaff77150f3ccb" protocol=ttrpc version=3 Nov 7 23:49:56.409271 kubelet[2704]: I1107 23:49:56.409244 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 7 23:49:56.410646 kubelet[2704]: E1107 23:49:56.410599 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:56.431808 systemd[1]: Started 
cri-containerd-75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4.scope - libcontainer container 75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4. Nov 7 23:49:56.477234 kubelet[2704]: E1107 23:49:56.477198 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.477234 kubelet[2704]: W1107 23:49:56.477228 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.477389 kubelet[2704]: E1107 23:49:56.477247 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.477485 kubelet[2704]: E1107 23:49:56.477447 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.477485 kubelet[2704]: W1107 23:49:56.477455 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.477485 kubelet[2704]: E1107 23:49:56.477464 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.477619 kubelet[2704]: E1107 23:49:56.477599 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.477619 kubelet[2704]: W1107 23:49:56.477618 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.477696 kubelet[2704]: E1107 23:49:56.477626 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.477786 kubelet[2704]: E1107 23:49:56.477776 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.477786 kubelet[2704]: W1107 23:49:56.477786 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.477841 kubelet[2704]: E1107 23:49:56.477794 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.477938 kubelet[2704]: E1107 23:49:56.477928 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.477969 kubelet[2704]: W1107 23:49:56.477938 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.477969 kubelet[2704]: E1107 23:49:56.477955 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:56.478076 kubelet[2704]: E1107 23:49:56.478066 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.478108 kubelet[2704]: W1107 23:49:56.478080 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.478108 kubelet[2704]: E1107 23:49:56.478088 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.478215 kubelet[2704]: E1107 23:49:56.478206 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.478215 kubelet[2704]: W1107 23:49:56.478214 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.478273 kubelet[2704]: E1107 23:49:56.478222 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.478353 kubelet[2704]: E1107 23:49:56.478343 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.478353 kubelet[2704]: W1107 23:49:56.478353 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.478405 kubelet[2704]: E1107 23:49:56.478360 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.478511 kubelet[2704]: E1107 23:49:56.478500 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.478537 kubelet[2704]: W1107 23:49:56.478511 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.478537 kubelet[2704]: E1107 23:49:56.478529 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.478675 kubelet[2704]: E1107 23:49:56.478664 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.478675 kubelet[2704]: W1107 23:49:56.478674 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.478733 kubelet[2704]: E1107 23:49:56.478684 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:56.478802 kubelet[2704]: E1107 23:49:56.478792 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.478802 kubelet[2704]: W1107 23:49:56.478801 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.478854 kubelet[2704]: E1107 23:49:56.478809 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.478941 kubelet[2704]: E1107 23:49:56.478932 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.478941 kubelet[2704]: W1107 23:49:56.478941 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.478991 kubelet[2704]: E1107 23:49:56.478949 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.479089 kubelet[2704]: E1107 23:49:56.479079 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.479119 kubelet[2704]: W1107 23:49:56.479090 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.479119 kubelet[2704]: E1107 23:49:56.479109 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.479230 kubelet[2704]: E1107 23:49:56.479220 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.479260 kubelet[2704]: W1107 23:49:56.479231 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.479260 kubelet[2704]: E1107 23:49:56.479239 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.479362 kubelet[2704]: E1107 23:49:56.479352 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.479362 kubelet[2704]: W1107 23:49:56.479361 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.479409 kubelet[2704]: E1107 23:49:56.479369 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:56.502653 containerd[1559]: time="2025-11-07T23:49:56.502463905Z" level=info msg="StartContainer for \"75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4\" returns successfully" Nov 7 23:49:56.502849 kubelet[2704]: E1107 23:49:56.502823 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.502849 kubelet[2704]: W1107 23:49:56.502849 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.502929 kubelet[2704]: E1107 23:49:56.502866 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.503103 kubelet[2704]: E1107 23:49:56.503079 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.503103 kubelet[2704]: W1107 23:49:56.503093 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.503103 kubelet[2704]: E1107 23:49:56.503104 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.503331 kubelet[2704]: E1107 23:49:56.503298 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.503331 kubelet[2704]: W1107 23:49:56.503306 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.503331 kubelet[2704]: E1107 23:49:56.503315 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.504749 kubelet[2704]: E1107 23:49:56.504680 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.504749 kubelet[2704]: W1107 23:49:56.504708 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.504749 kubelet[2704]: E1107 23:49:56.504722 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:56.504972 kubelet[2704]: E1107 23:49:56.504941 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.504972 kubelet[2704]: W1107 23:49:56.504954 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.504972 kubelet[2704]: E1107 23:49:56.504965 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.505298 kubelet[2704]: E1107 23:49:56.505217 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.505298 kubelet[2704]: W1107 23:49:56.505232 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.505298 kubelet[2704]: E1107 23:49:56.505243 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.505805 kubelet[2704]: E1107 23:49:56.505570 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.505805 kubelet[2704]: W1107 23:49:56.505594 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.505805 kubelet[2704]: E1107 23:49:56.505607 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.505927 kubelet[2704]: E1107 23:49:56.505878 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.505927 kubelet[2704]: W1107 23:49:56.505888 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.505927 kubelet[2704]: E1107 23:49:56.505898 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.506274 kubelet[2704]: E1107 23:49:56.506232 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.506274 kubelet[2704]: W1107 23:49:56.506246 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.506274 kubelet[2704]: E1107 23:49:56.506256 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:56.506543 kubelet[2704]: E1107 23:49:56.506525 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.506543 kubelet[2704]: W1107 23:49:56.506541 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.506665 kubelet[2704]: E1107 23:49:56.506554 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.507688 kubelet[2704]: E1107 23:49:56.506752 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.507688 kubelet[2704]: W1107 23:49:56.506764 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.507688 kubelet[2704]: E1107 23:49:56.506772 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.507688 kubelet[2704]: E1107 23:49:56.507054 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.507688 kubelet[2704]: W1107 23:49:56.507064 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.507688 kubelet[2704]: E1107 23:49:56.507072 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.507688 kubelet[2704]: E1107 23:49:56.507354 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.507688 kubelet[2704]: W1107 23:49:56.507362 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.507688 kubelet[2704]: E1107 23:49:56.507371 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.507688 kubelet[2704]: E1107 23:49:56.507519 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.507951 kubelet[2704]: W1107 23:49:56.507528 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.507951 kubelet[2704]: E1107 23:49:56.507535 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 7 23:49:56.507951 kubelet[2704]: E1107 23:49:56.507718 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.507951 kubelet[2704]: W1107 23:49:56.507726 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.507951 kubelet[2704]: E1107 23:49:56.507733 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.507951 kubelet[2704]: E1107 23:49:56.507860 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.507951 kubelet[2704]: W1107 23:49:56.507867 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.507951 kubelet[2704]: E1107 23:49:56.507874 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.508094 kubelet[2704]: E1107 23:49:56.508011 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.508094 kubelet[2704]: W1107 23:49:56.508019 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.508094 kubelet[2704]: E1107 23:49:56.508027 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.508454 kubelet[2704]: E1107 23:49:56.508434 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 7 23:49:56.508454 kubelet[2704]: W1107 23:49:56.508450 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 7 23:49:56.508454 kubelet[2704]: E1107 23:49:56.508461 2704 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 7 23:49:56.517921 systemd[1]: cri-containerd-75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4.scope: Deactivated successfully. Nov 7 23:49:56.552511 containerd[1559]: time="2025-11-07T23:49:56.552449793Z" level=info msg="received container exit event container_id:\"75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4\" id:\"75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4\" pid:3416 exited_at:{seconds:1762559396 nanos:547951351}" Nov 7 23:49:56.607824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75004b4ba5582cfd7133ee4edb39d109ecaa73b957fe0a44e82b1c558183b6f4-rootfs.mount: Deactivated successfully. 
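The container exit event above carries the flexvol-driver container's termination time as a protobuf-style {seconds, nanos} pair rather than a formatted timestamp. A small Go sketch (purely a conversion of the logged numbers, nothing containerd-specific) turns it back into wall-clock time and confirms it lines up with the surrounding 23:49:56 entries:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at values copied from the container exit event above:
        // {seconds:1762559396 nanos:547951351}
        exitedAt := time.Unix(1762559396, 547951351).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano))
        // 2025-11-07T23:49:56.547951351Z
    }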
Nov 7 23:49:57.308652 kubelet[2704]: E1107 23:49:57.308539 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 23:49:57.413897 kubelet[2704]: E1107 23:49:57.413731 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:49:57.416202 containerd[1559]: time="2025-11-07T23:49:57.415948301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 7 23:49:57.429979 kubelet[2704]: I1107 23:49:57.429858 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59854cd4c4-n7cr2" podStartSLOduration=2.832274994 podStartE2EDuration="4.429840464s" podCreationTimestamp="2025-11-07 23:49:53 +0000 UTC" firstStartedPulling="2025-11-07 23:49:53.556685316 +0000 UTC m=+25.356546625" lastFinishedPulling="2025-11-07 23:49:55.154250826 +0000 UTC m=+26.954112095" observedRunningTime="2025-11-07 23:49:55.424814579 +0000 UTC m=+27.224675888" watchObservedRunningTime="2025-11-07 23:49:57.429840464 +0000 UTC m=+29.229701773" Nov 7 23:49:59.309002 kubelet[2704]: E1107 23:49:59.308005 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 23:50:00.658378 containerd[1559]: time="2025-11-07T23:50:00.658334110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:50:00.660456 containerd[1559]: time="2025-11-07T23:50:00.660275897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 7 23:50:00.664654 containerd[1559]: time="2025-11-07T23:50:00.664609092Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:50:00.678864 containerd[1559]: time="2025-11-07T23:50:00.678809762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:50:00.679843 containerd[1559]: time="2025-11-07T23:50:00.679812031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.263825535s" Nov 7 23:50:00.679843 containerd[1559]: time="2025-11-07T23:50:00.679841068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 7 23:50:00.685387 containerd[1559]: time="2025-11-07T23:50:00.685349830Z" level=info msg="CreateContainer within sandbox 
\"1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 7 23:50:00.708765 containerd[1559]: time="2025-11-07T23:50:00.707299410Z" level=info msg="Container 7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:50:00.720910 containerd[1559]: time="2025-11-07T23:50:00.720858363Z" level=info msg="CreateContainer within sandbox \"1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2\"" Nov 7 23:50:00.721352 containerd[1559]: time="2025-11-07T23:50:00.721323382Z" level=info msg="StartContainer for \"7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2\"" Nov 7 23:50:00.723801 containerd[1559]: time="2025-11-07T23:50:00.723736228Z" level=info msg="connecting to shim 7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2" address="unix:///run/containerd/s/4752746aac49383b525ce27b96fd2810a02524fa745c6e9cb0eaff77150f3ccb" protocol=ttrpc version=3 Nov 7 23:50:00.748840 systemd[1]: Started cri-containerd-7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2.scope - libcontainer container 7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2. Nov 7 23:50:00.821197 containerd[1559]: time="2025-11-07T23:50:00.821081982Z" level=info msg="StartContainer for \"7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2\" returns successfully" Nov 7 23:50:01.308507 kubelet[2704]: E1107 23:50:01.308450 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 23:50:01.324290 systemd[1]: cri-containerd-7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2.scope: Deactivated successfully. Nov 7 23:50:01.324767 systemd[1]: cri-containerd-7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2.scope: Consumed 461ms CPU time, 178.3M memory peak, 2.7M read from disk, 165.9M written to disk. Nov 7 23:50:01.326416 containerd[1559]: time="2025-11-07T23:50:01.326372311Z" level=info msg="received container exit event container_id:\"7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2\" id:\"7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2\" pid:3509 exited_at:{seconds:1762559401 nanos:326178535}" Nov 7 23:50:01.345280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a6fcf3176c30d945845b7bc95bea60210ab5718f0af4074183146fb124168c2-rootfs.mount: Deactivated successfully. Nov 7 23:50:01.421139 kubelet[2704]: I1107 23:50:01.421100 2704 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 7 23:50:01.425910 kubelet[2704]: E1107 23:50:01.425871 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:01.525400 systemd[1]: Created slice kubepods-burstable-pod7ebc1790_987e_4353_8d75_b2b09f07e98c.slice - libcontainer container kubepods-burstable-pod7ebc1790_987e_4353_8d75_b2b09f07e98c.slice. 
Nov 7 23:50:01.534730 systemd[1]: Created slice kubepods-besteffort-podc6bfa01b_1c8c_4494_9a54_e48d2a2c5cec.slice - libcontainer container kubepods-besteffort-podc6bfa01b_1c8c_4494_9a54_e48d2a2c5cec.slice. Nov 7 23:50:01.539831 kubelet[2704]: I1107 23:50:01.539724 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ebc1790-987e-4353-8d75-b2b09f07e98c-config-volume\") pod \"coredns-66bc5c9577-mljdd\" (UID: \"7ebc1790-987e-4353-8d75-b2b09f07e98c\") " pod="kube-system/coredns-66bc5c9577-mljdd" Nov 7 23:50:01.539831 kubelet[2704]: I1107 23:50:01.539827 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8m7n\" (UniqueName: \"kubernetes.io/projected/7ebc1790-987e-4353-8d75-b2b09f07e98c-kube-api-access-x8m7n\") pod \"coredns-66bc5c9577-mljdd\" (UID: \"7ebc1790-987e-4353-8d75-b2b09f07e98c\") " pod="kube-system/coredns-66bc5c9577-mljdd" Nov 7 23:50:01.547128 systemd[1]: Created slice kubepods-besteffort-pod7c146669_da8e_492d_867a_402ab7ddcdae.slice - libcontainer container kubepods-besteffort-pod7c146669_da8e_492d_867a_402ab7ddcdae.slice. Nov 7 23:50:01.554975 systemd[1]: Created slice kubepods-besteffort-pod713c65e9_5ca4_4cc6_9849_2451c1fb60f7.slice - libcontainer container kubepods-besteffort-pod713c65e9_5ca4_4cc6_9849_2451c1fb60f7.slice. Nov 7 23:50:01.561975 systemd[1]: Created slice kubepods-burstable-pod949fc941_44a8_4b68_ab84_d05dd9327902.slice - libcontainer container kubepods-burstable-pod949fc941_44a8_4b68_ab84_d05dd9327902.slice. Nov 7 23:50:01.570307 systemd[1]: Created slice kubepods-besteffort-podc9a8041c_b786_4595_b025_c55df53faaff.slice - libcontainer container kubepods-besteffort-podc9a8041c_b786_4595_b025_c55df53faaff.slice. Nov 7 23:50:01.577533 systemd[1]: Created slice kubepods-besteffort-podaad4e7c3_33de_4281_bc94_dbb6680eeb54.slice - libcontainer container kubepods-besteffort-podaad4e7c3_33de_4281_bc94_dbb6680eeb54.slice. 
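The systemd messages above show kubelet creating one cgroup slice per newly schedulable pod. The names follow a pattern that can be read straight off the log: kubepods-<QoS class>-pod<pod UID with every '-' escaped to '_'>.slice, so the besteffort pod with UID c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec lands in kubepods-besteffort-podc6bfa01b_1c8c_4494_9a54_e48d2a2c5cec.slice. The helper below only reproduces that naming as observed here; it is a sketch, not kubelet's implementation.

    // slicename.go: small sketch (not kubelet code) reproducing the systemd slice
    // names visible above: kubepods-<qos>-pod<uid with '-' -> '_'>.slice.
    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName mirrors the naming pattern seen in the log; '-' is escaped to '_'
    // because '-' is a separator in systemd unit name components.
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID taken from the calico-kube-controllers pod created above.
        fmt.Println(podSliceName("besteffort", "c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec"))
        // Output: kubepods-besteffort-podc6bfa01b_1c8c_4494_9a54_e48d2a2c5cec.slice
    }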
Nov 7 23:50:01.640435 kubelet[2704]: I1107 23:50:01.640332 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9b5\" (UniqueName: \"kubernetes.io/projected/949fc941-44a8-4b68-ab84-d05dd9327902-kube-api-access-rb9b5\") pod \"coredns-66bc5c9577-4gm4k\" (UID: \"949fc941-44a8-4b68-ab84-d05dd9327902\") " pod="kube-system/coredns-66bc5c9577-4gm4k" Nov 7 23:50:01.640435 kubelet[2704]: I1107 23:50:01.640404 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np82s\" (UniqueName: \"kubernetes.io/projected/7c146669-da8e-492d-867a-402ab7ddcdae-kube-api-access-np82s\") pod \"calico-apiserver-6b4fd87cbb-4qgj5\" (UID: \"7c146669-da8e-492d-867a-402ab7ddcdae\") " pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" Nov 7 23:50:01.640611 kubelet[2704]: I1107 23:50:01.640487 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/713c65e9-5ca4-4cc6-9849-2451c1fb60f7-calico-apiserver-certs\") pod \"calico-apiserver-6b4fd87cbb-lbv5q\" (UID: \"713c65e9-5ca4-4cc6-9849-2451c1fb60f7\") " pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" Nov 7 23:50:01.640611 kubelet[2704]: I1107 23:50:01.640521 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aad4e7c3-33de-4281-bc94-dbb6680eeb54-whisker-backend-key-pair\") pod \"whisker-5dd5ffd45c-72ccf\" (UID: \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\") " pod="calico-system/whisker-5dd5ffd45c-72ccf" Nov 7 23:50:01.640611 kubelet[2704]: I1107 23:50:01.640559 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stxg\" (UniqueName: \"kubernetes.io/projected/c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec-kube-api-access-6stxg\") pod \"calico-kube-controllers-564977585f-zfthj\" (UID: \"c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec\") " pod="calico-system/calico-kube-controllers-564977585f-zfthj" Nov 7 23:50:01.640611 kubelet[2704]: I1107 23:50:01.640583 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a8041c-b786-4595-b025-c55df53faaff-config\") pod \"goldmane-7c778bb748-mxd8f\" (UID: \"c9a8041c-b786-4595-b025-c55df53faaff\") " pod="calico-system/goldmane-7c778bb748-mxd8f" Nov 7 23:50:01.640611 kubelet[2704]: I1107 23:50:01.640601 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2zh\" (UniqueName: \"kubernetes.io/projected/713c65e9-5ca4-4cc6-9849-2451c1fb60f7-kube-api-access-xb2zh\") pod \"calico-apiserver-6b4fd87cbb-lbv5q\" (UID: \"713c65e9-5ca4-4cc6-9849-2451c1fb60f7\") " pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" Nov 7 23:50:01.640780 kubelet[2704]: I1107 23:50:01.640623 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctqbt\" (UniqueName: \"kubernetes.io/projected/aad4e7c3-33de-4281-bc94-dbb6680eeb54-kube-api-access-ctqbt\") pod \"whisker-5dd5ffd45c-72ccf\" (UID: \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\") " pod="calico-system/whisker-5dd5ffd45c-72ccf" Nov 7 23:50:01.640780 kubelet[2704]: I1107 23:50:01.640670 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec-tigera-ca-bundle\") pod \"calico-kube-controllers-564977585f-zfthj\" (UID: \"c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec\") " pod="calico-system/calico-kube-controllers-564977585f-zfthj" Nov 7 23:50:01.640780 kubelet[2704]: I1107 23:50:01.640696 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c9a8041c-b786-4595-b025-c55df53faaff-goldmane-key-pair\") pod \"goldmane-7c778bb748-mxd8f\" (UID: \"c9a8041c-b786-4595-b025-c55df53faaff\") " pod="calico-system/goldmane-7c778bb748-mxd8f" Nov 7 23:50:01.640780 kubelet[2704]: I1107 23:50:01.640719 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9lv\" (UniqueName: \"kubernetes.io/projected/c9a8041c-b786-4595-b025-c55df53faaff-kube-api-access-5q9lv\") pod \"goldmane-7c778bb748-mxd8f\" (UID: \"c9a8041c-b786-4595-b025-c55df53faaff\") " pod="calico-system/goldmane-7c778bb748-mxd8f" Nov 7 23:50:01.640780 kubelet[2704]: I1107 23:50:01.640733 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad4e7c3-33de-4281-bc94-dbb6680eeb54-whisker-ca-bundle\") pod \"whisker-5dd5ffd45c-72ccf\" (UID: \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\") " pod="calico-system/whisker-5dd5ffd45c-72ccf" Nov 7 23:50:01.640891 kubelet[2704]: I1107 23:50:01.640755 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c146669-da8e-492d-867a-402ab7ddcdae-calico-apiserver-certs\") pod \"calico-apiserver-6b4fd87cbb-4qgj5\" (UID: \"7c146669-da8e-492d-867a-402ab7ddcdae\") " pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" Nov 7 23:50:01.640891 kubelet[2704]: I1107 23:50:01.640770 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9a8041c-b786-4595-b025-c55df53faaff-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-mxd8f\" (UID: \"c9a8041c-b786-4595-b025-c55df53faaff\") " pod="calico-system/goldmane-7c778bb748-mxd8f" Nov 7 23:50:01.640891 kubelet[2704]: I1107 23:50:01.640785 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949fc941-44a8-4b68-ab84-d05dd9327902-config-volume\") pod \"coredns-66bc5c9577-4gm4k\" (UID: \"949fc941-44a8-4b68-ab84-d05dd9327902\") " pod="kube-system/coredns-66bc5c9577-4gm4k" Nov 7 23:50:01.841580 containerd[1559]: time="2025-11-07T23:50:01.841135745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564977585f-zfthj,Uid:c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec,Namespace:calico-system,Attempt:0,}" Nov 7 23:50:01.842717 kubelet[2704]: E1107 23:50:01.842678 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:01.843206 containerd[1559]: time="2025-11-07T23:50:01.843153019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mljdd,Uid:7ebc1790-987e-4353-8d75-b2b09f07e98c,Namespace:kube-system,Attempt:0,}" Nov 7 23:50:01.852847 containerd[1559]: time="2025-11-07T23:50:01.852810799Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b4fd87cbb-4qgj5,Uid:7c146669-da8e-492d-867a-402ab7ddcdae,Namespace:calico-apiserver,Attempt:0,}" Nov 7 23:50:01.860159 containerd[1559]: time="2025-11-07T23:50:01.859764710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b4fd87cbb-lbv5q,Uid:713c65e9-5ca4-4cc6-9849-2451c1fb60f7,Namespace:calico-apiserver,Attempt:0,}" Nov 7 23:50:01.868774 kubelet[2704]: E1107 23:50:01.868721 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:01.870808 containerd[1559]: time="2025-11-07T23:50:01.869646902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4gm4k,Uid:949fc941-44a8-4b68-ab84-d05dd9327902,Namespace:kube-system,Attempt:0,}" Nov 7 23:50:01.877784 containerd[1559]: time="2025-11-07T23:50:01.877738434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mxd8f,Uid:c9a8041c-b786-4595-b025-c55df53faaff,Namespace:calico-system,Attempt:0,}" Nov 7 23:50:01.884604 containerd[1559]: time="2025-11-07T23:50:01.884542363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd5ffd45c-72ccf,Uid:aad4e7c3-33de-4281-bc94-dbb6680eeb54,Namespace:calico-system,Attempt:0,}" Nov 7 23:50:01.970836 containerd[1559]: time="2025-11-07T23:50:01.970730753Z" level=error msg="Failed to destroy network for sandbox \"786d471edad2a1040faf706e39215b70d4f331d0c823155538770a7f10de1113\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.974216 containerd[1559]: time="2025-11-07T23:50:01.974173213Z" level=error msg="Failed to destroy network for sandbox \"94895377b1bf5b3d7b51e2c1db360efde300be2a91d7a8c265fa19f0375e859f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.974538 containerd[1559]: time="2025-11-07T23:50:01.974500093Z" level=error msg="Failed to destroy network for sandbox \"5cb7218c7b0e0dd3560207ee40113a60bac15a642cdf6fe53fd9887a95b8a5fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.976234 containerd[1559]: time="2025-11-07T23:50:01.976184847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd5ffd45c-72ccf,Uid:aad4e7c3-33de-4281-bc94-dbb6680eeb54,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"786d471edad2a1040faf706e39215b70d4f331d0c823155538770a7f10de1113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.976905 kubelet[2704]: E1107 23:50:01.976411 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"786d471edad2a1040faf706e39215b70d4f331d0c823155538770a7f10de1113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 7 23:50:01.976905 kubelet[2704]: E1107 23:50:01.976478 2704 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"786d471edad2a1040faf706e39215b70d4f331d0c823155538770a7f10de1113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd5ffd45c-72ccf" Nov 7 23:50:01.976905 kubelet[2704]: E1107 23:50:01.976501 2704 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"786d471edad2a1040faf706e39215b70d4f331d0c823155538770a7f10de1113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd5ffd45c-72ccf" Nov 7 23:50:01.977346 kubelet[2704]: E1107 23:50:01.976553 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dd5ffd45c-72ccf_calico-system(aad4e7c3-33de-4281-bc94-dbb6680eeb54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dd5ffd45c-72ccf_calico-system(aad4e7c3-33de-4281-bc94-dbb6680eeb54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"786d471edad2a1040faf706e39215b70d4f331d0c823155538770a7f10de1113\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dd5ffd45c-72ccf" podUID="aad4e7c3-33de-4281-bc94-dbb6680eeb54" Nov 7 23:50:01.977408 containerd[1559]: time="2025-11-07T23:50:01.976880042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b4fd87cbb-lbv5q,Uid:713c65e9-5ca4-4cc6-9849-2451c1fb60f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94895377b1bf5b3d7b51e2c1db360efde300be2a91d7a8c265fa19f0375e859f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.977449 kubelet[2704]: E1107 23:50:01.977344 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94895377b1bf5b3d7b51e2c1db360efde300be2a91d7a8c265fa19f0375e859f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.977449 kubelet[2704]: E1107 23:50:01.977382 2704 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94895377b1bf5b3d7b51e2c1db360efde300be2a91d7a8c265fa19f0375e859f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" Nov 7 23:50:01.977449 kubelet[2704]: E1107 23:50:01.977400 2704 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"94895377b1bf5b3d7b51e2c1db360efde300be2a91d7a8c265fa19f0375e859f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" Nov 7 23:50:01.977997 kubelet[2704]: E1107 23:50:01.977523 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b4fd87cbb-lbv5q_calico-apiserver(713c65e9-5ca4-4cc6-9849-2451c1fb60f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b4fd87cbb-lbv5q_calico-apiserver(713c65e9-5ca4-4cc6-9849-2451c1fb60f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94895377b1bf5b3d7b51e2c1db360efde300be2a91d7a8c265fa19f0375e859f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" podUID="713c65e9-5ca4-4cc6-9849-2451c1fb60f7" Nov 7 23:50:01.978089 containerd[1559]: time="2025-11-07T23:50:01.977898558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mljdd,Uid:7ebc1790-987e-4353-8d75-b2b09f07e98c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb7218c7b0e0dd3560207ee40113a60bac15a642cdf6fe53fd9887a95b8a5fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.978131 kubelet[2704]: E1107 23:50:01.978051 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb7218c7b0e0dd3560207ee40113a60bac15a642cdf6fe53fd9887a95b8a5fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.978131 kubelet[2704]: E1107 23:50:01.978095 2704 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb7218c7b0e0dd3560207ee40113a60bac15a642cdf6fe53fd9887a95b8a5fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mljdd" Nov 7 23:50:01.978131 kubelet[2704]: E1107 23:50:01.978112 2704 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb7218c7b0e0dd3560207ee40113a60bac15a642cdf6fe53fd9887a95b8a5fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mljdd" Nov 7 23:50:01.978203 kubelet[2704]: E1107 23:50:01.978151 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mljdd_kube-system(7ebc1790-987e-4353-8d75-b2b09f07e98c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mljdd_kube-system(7ebc1790-987e-4353-8d75-b2b09f07e98c)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"5cb7218c7b0e0dd3560207ee40113a60bac15a642cdf6fe53fd9887a95b8a5fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mljdd" podUID="7ebc1790-987e-4353-8d75-b2b09f07e98c" Nov 7 23:50:01.981300 containerd[1559]: time="2025-11-07T23:50:01.981233550Z" level=error msg="Failed to destroy network for sandbox \"658fa81ef6bb309af4b228b940f13044a587b0aa74df4af9591f4e52cf6fc67a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.983887 containerd[1559]: time="2025-11-07T23:50:01.983781679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4gm4k,Uid:949fc941-44a8-4b68-ab84-d05dd9327902,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"658fa81ef6bb309af4b228b940f13044a587b0aa74df4af9591f4e52cf6fc67a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.984326 kubelet[2704]: E1107 23:50:01.984048 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"658fa81ef6bb309af4b228b940f13044a587b0aa74df4af9591f4e52cf6fc67a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.984326 kubelet[2704]: E1107 23:50:01.984095 2704 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"658fa81ef6bb309af4b228b940f13044a587b0aa74df4af9591f4e52cf6fc67a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4gm4k" Nov 7 23:50:01.984326 kubelet[2704]: E1107 23:50:01.984116 2704 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"658fa81ef6bb309af4b228b940f13044a587b0aa74df4af9591f4e52cf6fc67a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4gm4k" Nov 7 23:50:01.984444 kubelet[2704]: E1107 23:50:01.984169 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4gm4k_kube-system(949fc941-44a8-4b68-ab84-d05dd9327902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4gm4k_kube-system(949fc941-44a8-4b68-ab84-d05dd9327902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"658fa81ef6bb309af4b228b940f13044a587b0aa74df4af9591f4e52cf6fc67a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4gm4k" 
podUID="949fc941-44a8-4b68-ab84-d05dd9327902" Nov 7 23:50:01.993128 containerd[1559]: time="2025-11-07T23:50:01.993027789Z" level=error msg="Failed to destroy network for sandbox \"f79be96437c2f9a62d14578122fde8c3a779aedb5600d4d67a4a68286672523b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.994107 containerd[1559]: time="2025-11-07T23:50:01.994073862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b4fd87cbb-4qgj5,Uid:7c146669-da8e-492d-867a-402ab7ddcdae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79be96437c2f9a62d14578122fde8c3a779aedb5600d4d67a4a68286672523b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.994445 kubelet[2704]: E1107 23:50:01.994411 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79be96437c2f9a62d14578122fde8c3a779aedb5600d4d67a4a68286672523b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.994500 kubelet[2704]: E1107 23:50:01.994462 2704 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79be96437c2f9a62d14578122fde8c3a779aedb5600d4d67a4a68286672523b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" Nov 7 23:50:01.994500 kubelet[2704]: E1107 23:50:01.994480 2704 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79be96437c2f9a62d14578122fde8c3a779aedb5600d4d67a4a68286672523b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" Nov 7 23:50:01.994545 kubelet[2704]: E1107 23:50:01.994528 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b4fd87cbb-4qgj5_calico-apiserver(7c146669-da8e-492d-867a-402ab7ddcdae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b4fd87cbb-4qgj5_calico-apiserver(7c146669-da8e-492d-867a-402ab7ddcdae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f79be96437c2f9a62d14578122fde8c3a779aedb5600d4d67a4a68286672523b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" podUID="7c146669-da8e-492d-867a-402ab7ddcdae" Nov 7 23:50:01.994856 containerd[1559]: time="2025-11-07T23:50:01.994776416Z" level=error msg="Failed to destroy network for sandbox \"ec523c8c8500ef6b0e112c425676c2f9a14a4692129292029853e5d2524ac3da\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.995063 containerd[1559]: time="2025-11-07T23:50:01.995038744Z" level=error msg="Failed to destroy network for sandbox \"8d8d1f4e2283a1736cad1ab0d9b26b2dfb484f0814b519764be692f8312c0d08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.996190 containerd[1559]: time="2025-11-07T23:50:01.996161327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564977585f-zfthj,Uid:c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec523c8c8500ef6b0e112c425676c2f9a14a4692129292029853e5d2524ac3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.996525 kubelet[2704]: E1107 23:50:01.996496 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec523c8c8500ef6b0e112c425676c2f9a14a4692129292029853e5d2524ac3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.996582 kubelet[2704]: E1107 23:50:01.996537 2704 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec523c8c8500ef6b0e112c425676c2f9a14a4692129292029853e5d2524ac3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564977585f-zfthj" Nov 7 23:50:01.996582 kubelet[2704]: E1107 23:50:01.996552 2704 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec523c8c8500ef6b0e112c425676c2f9a14a4692129292029853e5d2524ac3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564977585f-zfthj" Nov 7 23:50:01.996718 kubelet[2704]: E1107 23:50:01.996593 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564977585f-zfthj_calico-system(c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564977585f-zfthj_calico-system(c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec523c8c8500ef6b0e112c425676c2f9a14a4692129292029853e5d2524ac3da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564977585f-zfthj" podUID="c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec" Nov 7 23:50:01.997155 containerd[1559]: time="2025-11-07T23:50:01.997045619Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-mxd8f,Uid:c9a8041c-b786-4595-b025-c55df53faaff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d8d1f4e2283a1736cad1ab0d9b26b2dfb484f0814b519764be692f8312c0d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.997423 kubelet[2704]: E1107 23:50:01.997403 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d8d1f4e2283a1736cad1ab0d9b26b2dfb484f0814b519764be692f8312c0d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:01.997479 kubelet[2704]: E1107 23:50:01.997453 2704 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d8d1f4e2283a1736cad1ab0d9b26b2dfb484f0814b519764be692f8312c0d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mxd8f" Nov 7 23:50:01.997479 kubelet[2704]: E1107 23:50:01.997470 2704 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d8d1f4e2283a1736cad1ab0d9b26b2dfb484f0814b519764be692f8312c0d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mxd8f" Nov 7 23:50:01.997545 kubelet[2704]: E1107 23:50:01.997521 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-mxd8f_calico-system(c9a8041c-b786-4595-b025-c55df53faaff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-mxd8f_calico-system(c9a8041c-b786-4595-b025-c55df53faaff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d8d1f4e2283a1736cad1ab0d9b26b2dfb484f0814b519764be692f8312c0d08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-mxd8f" podUID="c9a8041c-b786-4595-b025-c55df53faaff" Nov 7 23:50:02.431048 kubelet[2704]: E1107 23:50:02.431019 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:02.432274 containerd[1559]: time="2025-11-07T23:50:02.432202052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 7 23:50:03.314225 systemd[1]: Created slice kubepods-besteffort-podcd5bfb52_b349_4f3d_ac38_78d6f47e1f8c.slice - libcontainer container kubepods-besteffort-podcd5bfb52_b349_4f3d_ac38_78d6f47e1f8c.slice. 
Nov 7 23:50:03.319681 containerd[1559]: time="2025-11-07T23:50:03.319603493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxrk8,Uid:cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c,Namespace:calico-system,Attempt:0,}" Nov 7 23:50:03.385218 containerd[1559]: time="2025-11-07T23:50:03.385165714Z" level=error msg="Failed to destroy network for sandbox \"9fe59fb2193aaf14acb127fc89d611dae333b205cf5d6456b4d5eee796bfa68b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:03.388320 systemd[1]: run-netns-cni\x2ddfd1384e\x2dd745\x2ddfe4\x2dfc47\x2de8c46328245e.mount: Deactivated successfully. Nov 7 23:50:03.389193 containerd[1559]: time="2025-11-07T23:50:03.389129231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxrk8,Uid:cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe59fb2193aaf14acb127fc89d611dae333b205cf5d6456b4d5eee796bfa68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:03.389565 kubelet[2704]: E1107 23:50:03.389477 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe59fb2193aaf14acb127fc89d611dae333b205cf5d6456b4d5eee796bfa68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:50:03.389744 kubelet[2704]: E1107 23:50:03.389690 2704 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe59fb2193aaf14acb127fc89d611dae333b205cf5d6456b4d5eee796bfa68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nxrk8" Nov 7 23:50:03.389915 kubelet[2704]: E1107 23:50:03.389802 2704 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe59fb2193aaf14acb127fc89d611dae333b205cf5d6456b4d5eee796bfa68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nxrk8" Nov 7 23:50:03.390134 kubelet[2704]: E1107 23:50:03.390025 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nxrk8_calico-system(cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nxrk8_calico-system(cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fe59fb2193aaf14acb127fc89d611dae333b205cf5d6456b4d5eee796bfa68b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 
23:50:06.464951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79095551.mount: Deactivated successfully. Nov 7 23:50:06.540548 containerd[1559]: time="2025-11-07T23:50:06.540493908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.108222064s" Nov 7 23:50:06.540548 containerd[1559]: time="2025-11-07T23:50:06.540550866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 7 23:50:06.554499 containerd[1559]: time="2025-11-07T23:50:06.537167674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 7 23:50:06.554499 containerd[1559]: time="2025-11-07T23:50:06.553421499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:50:06.555662 containerd[1559]: time="2025-11-07T23:50:06.555276909Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:50:06.555863 containerd[1559]: time="2025-11-07T23:50:06.555837927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:50:06.558908 containerd[1559]: time="2025-11-07T23:50:06.558865093Z" level=info msg="CreateContainer within sandbox \"1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 7 23:50:06.584433 containerd[1559]: time="2025-11-07T23:50:06.584376847Z" level=info msg="Container a145056964f943e74d04b08d98e0b07fafa231acfbab46efab5cf42892d45aa3: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:50:06.593994 containerd[1559]: time="2025-11-07T23:50:06.593928206Z" level=info msg="CreateContainer within sandbox \"1be0b8a810bf45eccb339a2ec26d9aebb01281020d0322bfa3ace884c8ae27f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a145056964f943e74d04b08d98e0b07fafa231acfbab46efab5cf42892d45aa3\"" Nov 7 23:50:06.594795 containerd[1559]: time="2025-11-07T23:50:06.594763774Z" level=info msg="StartContainer for \"a145056964f943e74d04b08d98e0b07fafa231acfbab46efab5cf42892d45aa3\"" Nov 7 23:50:06.596571 containerd[1559]: time="2025-11-07T23:50:06.596538827Z" level=info msg="connecting to shim a145056964f943e74d04b08d98e0b07fafa231acfbab46efab5cf42892d45aa3" address="unix:///run/containerd/s/4752746aac49383b525ce27b96fd2810a02524fa745c6e9cb0eaff77150f3ccb" protocol=ttrpc version=3 Nov 7 23:50:06.616825 systemd[1]: Started cri-containerd-a145056964f943e74d04b08d98e0b07fafa231acfbab46efab5cf42892d45aa3.scope - libcontainer container a145056964f943e74d04b08d98e0b07fafa231acfbab46efab5cf42892d45aa3. Nov 7 23:50:06.699868 containerd[1559]: time="2025-11-07T23:50:06.699827398Z" level=info msg="StartContainer for \"a145056964f943e74d04b08d98e0b07fafa231acfbab46efab5cf42892d45aa3\" returns successfully" Nov 7 23:50:06.827503 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Nov 7 23:50:06.827709 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 7 23:50:07.076095 kubelet[2704]: I1107 23:50:07.076015 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aad4e7c3-33de-4281-bc94-dbb6680eeb54-whisker-backend-key-pair\") pod \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\" (UID: \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\") " Nov 7 23:50:07.078046 kubelet[2704]: I1107 23:50:07.076184 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctqbt\" (UniqueName: \"kubernetes.io/projected/aad4e7c3-33de-4281-bc94-dbb6680eeb54-kube-api-access-ctqbt\") pod \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\" (UID: \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\") " Nov 7 23:50:07.078046 kubelet[2704]: I1107 23:50:07.076214 2704 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad4e7c3-33de-4281-bc94-dbb6680eeb54-whisker-ca-bundle\") pod \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\" (UID: \"aad4e7c3-33de-4281-bc94-dbb6680eeb54\") " Nov 7 23:50:07.099424 kubelet[2704]: I1107 23:50:07.099360 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad4e7c3-33de-4281-bc94-dbb6680eeb54-kube-api-access-ctqbt" (OuterVolumeSpecName: "kube-api-access-ctqbt") pod "aad4e7c3-33de-4281-bc94-dbb6680eeb54" (UID: "aad4e7c3-33de-4281-bc94-dbb6680eeb54"). InnerVolumeSpecName "kube-api-access-ctqbt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 7 23:50:07.099687 kubelet[2704]: I1107 23:50:07.099358 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad4e7c3-33de-4281-bc94-dbb6680eeb54-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aad4e7c3-33de-4281-bc94-dbb6680eeb54" (UID: "aad4e7c3-33de-4281-bc94-dbb6680eeb54"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 7 23:50:07.101308 kubelet[2704]: I1107 23:50:07.101265 2704 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad4e7c3-33de-4281-bc94-dbb6680eeb54-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aad4e7c3-33de-4281-bc94-dbb6680eeb54" (UID: "aad4e7c3-33de-4281-bc94-dbb6680eeb54"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 7 23:50:07.179009 kubelet[2704]: I1107 23:50:07.178953 2704 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ctqbt\" (UniqueName: \"kubernetes.io/projected/aad4e7c3-33de-4281-bc94-dbb6680eeb54-kube-api-access-ctqbt\") on node \"localhost\" DevicePath \"\"" Nov 7 23:50:07.179009 kubelet[2704]: I1107 23:50:07.178996 2704 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad4e7c3-33de-4281-bc94-dbb6680eeb54-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 7 23:50:07.179009 kubelet[2704]: I1107 23:50:07.179006 2704 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aad4e7c3-33de-4281-bc94-dbb6680eeb54-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 7 23:50:07.446011 kubelet[2704]: E1107 23:50:07.445537 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:07.451876 systemd[1]: Removed slice kubepods-besteffort-podaad4e7c3_33de_4281_bc94_dbb6680eeb54.slice - libcontainer container kubepods-besteffort-podaad4e7c3_33de_4281_bc94_dbb6680eeb54.slice. Nov 7 23:50:07.468593 systemd[1]: var-lib-kubelet-pods-aad4e7c3\x2d33de\x2d4281\x2dbc94\x2ddbb6680eeb54-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dctqbt.mount: Deactivated successfully. Nov 7 23:50:07.468975 systemd[1]: var-lib-kubelet-pods-aad4e7c3\x2d33de\x2d4281\x2dbc94\x2ddbb6680eeb54-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 7 23:50:07.480492 kubelet[2704]: I1107 23:50:07.479571 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mhp7h" podStartSLOduration=1.618763562 podStartE2EDuration="14.468779461s" podCreationTimestamp="2025-11-07 23:49:53 +0000 UTC" firstStartedPulling="2025-11-07 23:49:53.691162263 +0000 UTC m=+25.491023572" lastFinishedPulling="2025-11-07 23:50:06.541178202 +0000 UTC m=+38.341039471" observedRunningTime="2025-11-07 23:50:07.46688769 +0000 UTC m=+39.266749039" watchObservedRunningTime="2025-11-07 23:50:07.468779461 +0000 UTC m=+39.268640770" Nov 7 23:50:07.531751 systemd[1]: Created slice kubepods-besteffort-pod04fb3d41_4a90_455a_820d_9b24bda7bc24.slice - libcontainer container kubepods-besteffort-pod04fb3d41_4a90_455a_820d_9b24bda7bc24.slice. 
Nov 7 23:50:07.582703 kubelet[2704]: I1107 23:50:07.582647 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xdb\" (UniqueName: \"kubernetes.io/projected/04fb3d41-4a90-455a-820d-9b24bda7bc24-kube-api-access-g4xdb\") pod \"whisker-5b67dc9485-t4fjt\" (UID: \"04fb3d41-4a90-455a-820d-9b24bda7bc24\") " pod="calico-system/whisker-5b67dc9485-t4fjt" Nov 7 23:50:07.582703 kubelet[2704]: I1107 23:50:07.582706 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04fb3d41-4a90-455a-820d-9b24bda7bc24-whisker-backend-key-pair\") pod \"whisker-5b67dc9485-t4fjt\" (UID: \"04fb3d41-4a90-455a-820d-9b24bda7bc24\") " pod="calico-system/whisker-5b67dc9485-t4fjt" Nov 7 23:50:07.582867 kubelet[2704]: I1107 23:50:07.582736 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04fb3d41-4a90-455a-820d-9b24bda7bc24-whisker-ca-bundle\") pod \"whisker-5b67dc9485-t4fjt\" (UID: \"04fb3d41-4a90-455a-820d-9b24bda7bc24\") " pod="calico-system/whisker-5b67dc9485-t4fjt" Nov 7 23:50:07.856344 containerd[1559]: time="2025-11-07T23:50:07.856302597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b67dc9485-t4fjt,Uid:04fb3d41-4a90-455a-820d-9b24bda7bc24,Namespace:calico-system,Attempt:0,}" Nov 7 23:50:08.023530 systemd-networkd[1471]: cali5ea4ec045bf: Link UP Nov 7 23:50:08.024094 systemd-networkd[1471]: cali5ea4ec045bf: Gained carrier Nov 7 23:50:08.036570 containerd[1559]: 2025-11-07 23:50:07.892 [INFO][3908] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:50:08.036570 containerd[1559]: 2025-11-07 23:50:07.923 [INFO][3908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b67dc9485--t4fjt-eth0 whisker-5b67dc9485- calico-system 04fb3d41-4a90-455a-820d-9b24bda7bc24 913 0 2025-11-07 23:50:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b67dc9485 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b67dc9485-t4fjt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5ea4ec045bf [] [] }} ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Namespace="calico-system" Pod="whisker-5b67dc9485-t4fjt" WorkloadEndpoint="localhost-k8s-whisker--5b67dc9485--t4fjt-" Nov 7 23:50:08.036570 containerd[1559]: 2025-11-07 23:50:07.924 [INFO][3908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Namespace="calico-system" Pod="whisker-5b67dc9485-t4fjt" WorkloadEndpoint="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" Nov 7 23:50:08.036570 containerd[1559]: 2025-11-07 23:50:07.982 [INFO][3922] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" HandleID="k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Workload="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:07.982 [INFO][3922] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" 
HandleID="k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Workload="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b67dc9485-t4fjt", "timestamp":"2025-11-07 23:50:07.982155764 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:07.982 [INFO][3922] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:07.982 [INFO][3922] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:07.982 [INFO][3922] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:07.992 [INFO][3922] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" host="localhost" Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:07.997 [INFO][3922] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:08.001 [INFO][3922] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:08.003 [INFO][3922] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:08.005 [INFO][3922] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:08.036802 containerd[1559]: 2025-11-07 23:50:08.005 [INFO][3922] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" host="localhost" Nov 7 23:50:08.036991 containerd[1559]: 2025-11-07 23:50:08.006 [INFO][3922] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f Nov 7 23:50:08.036991 containerd[1559]: 2025-11-07 23:50:08.010 [INFO][3922] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" host="localhost" Nov 7 23:50:08.036991 containerd[1559]: 2025-11-07 23:50:08.014 [INFO][3922] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" host="localhost" Nov 7 23:50:08.036991 containerd[1559]: 2025-11-07 23:50:08.015 [INFO][3922] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" host="localhost" Nov 7 23:50:08.036991 containerd[1559]: 2025-11-07 23:50:08.015 [INFO][3922] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:50:08.036991 containerd[1559]: 2025-11-07 23:50:08.015 [INFO][3922] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" HandleID="k8s-pod-network.0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Workload="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" Nov 7 23:50:08.037154 containerd[1559]: 2025-11-07 23:50:08.017 [INFO][3908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Namespace="calico-system" Pod="whisker-5b67dc9485-t4fjt" WorkloadEndpoint="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b67dc9485--t4fjt-eth0", GenerateName:"whisker-5b67dc9485-", Namespace:"calico-system", SelfLink:"", UID:"04fb3d41-4a90-455a-820d-9b24bda7bc24", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 50, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b67dc9485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b67dc9485-t4fjt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5ea4ec045bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:08.037154 containerd[1559]: 2025-11-07 23:50:08.017 [INFO][3908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Namespace="calico-system" Pod="whisker-5b67dc9485-t4fjt" WorkloadEndpoint="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" Nov 7 23:50:08.037232 containerd[1559]: 2025-11-07 23:50:08.017 [INFO][3908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ea4ec045bf ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Namespace="calico-system" Pod="whisker-5b67dc9485-t4fjt" WorkloadEndpoint="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" Nov 7 23:50:08.037232 containerd[1559]: 2025-11-07 23:50:08.023 [INFO][3908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Namespace="calico-system" Pod="whisker-5b67dc9485-t4fjt" WorkloadEndpoint="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" Nov 7 23:50:08.037269 containerd[1559]: 2025-11-07 23:50:08.024 [INFO][3908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Namespace="calico-system" Pod="whisker-5b67dc9485-t4fjt" WorkloadEndpoint="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b67dc9485--t4fjt-eth0", GenerateName:"whisker-5b67dc9485-", Namespace:"calico-system", SelfLink:"", UID:"04fb3d41-4a90-455a-820d-9b24bda7bc24", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 50, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b67dc9485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f", Pod:"whisker-5b67dc9485-t4fjt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5ea4ec045bf", MAC:"7a:c8:07:9f:05:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:08.037313 containerd[1559]: 2025-11-07 23:50:08.033 [INFO][3908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" Namespace="calico-system" Pod="whisker-5b67dc9485-t4fjt" WorkloadEndpoint="localhost-k8s-whisker--5b67dc9485--t4fjt-eth0" Nov 7 23:50:08.108161 containerd[1559]: time="2025-11-07T23:50:08.107666332Z" level=info msg="connecting to shim 0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f" address="unix:///run/containerd/s/c1275353e54ba8e777a7e0b6287254700d03fcc15167018e4d2d27c96772c2c4" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:50:08.133815 systemd[1]: Started cri-containerd-0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f.scope - libcontainer container 0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f. Nov 7 23:50:08.146276 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:50:08.198485 containerd[1559]: time="2025-11-07T23:50:08.198436962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b67dc9485-t4fjt,Uid:04fb3d41-4a90-455a-820d-9b24bda7bc24,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d7195a81af9a7d06bed1c12cc2b0bd1c17e7e4da877c92d8cd0eddf8b51809f\"" Nov 7 23:50:08.201705 containerd[1559]: time="2025-11-07T23:50:08.201659927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 7 23:50:08.218928 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:36426.service - OpenSSH per-connection server daemon (10.0.0.1:36426). 
Nov 7 23:50:08.306775 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 36426 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:08.308868 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:08.314711 kubelet[2704]: I1107 23:50:08.314031 2704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad4e7c3-33de-4281-bc94-dbb6680eeb54" path="/var/lib/kubelet/pods/aad4e7c3-33de-4281-bc94-dbb6680eeb54/volumes" Nov 7 23:50:08.315212 systemd-logind[1543]: New session 8 of user core. Nov 7 23:50:08.321963 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 7 23:50:08.423207 containerd[1559]: time="2025-11-07T23:50:08.423067480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:08.424404 containerd[1559]: time="2025-11-07T23:50:08.424356394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 7 23:50:08.425884 containerd[1559]: time="2025-11-07T23:50:08.424445951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 7 23:50:08.425933 kubelet[2704]: E1107 23:50:08.424600 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:50:08.429335 kubelet[2704]: E1107 23:50:08.429279 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:50:08.429720 kubelet[2704]: E1107 23:50:08.429438 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b67dc9485-t4fjt_calico-system(04fb3d41-4a90-455a-820d-9b24bda7bc24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:08.430556 containerd[1559]: time="2025-11-07T23:50:08.430502774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 7 23:50:08.455895 kubelet[2704]: E1107 23:50:08.454996 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:08.496934 sshd[4086]: Connection closed by 10.0.0.1 port 36426 Nov 7 23:50:08.497460 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:08.502071 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:36426.service: Deactivated successfully. Nov 7 23:50:08.503990 systemd[1]: session-8.scope: Deactivated successfully. Nov 7 23:50:08.504850 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. 
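The 404 from ghcr.io surfaces as ErrImagePull, and on subsequent pod syncs the kubelet reports ImagePullBackOff for the same container (visible a few entries below). The kubelet retries failed pulls on an exponential back-off schedule; the commonly cited defaults are an initial 10 s delay doubling up to a 5 minute cap, but treat those constants as assumptions rather than values read from this system. A minimal sketch of that retry schedule:

    import itertools

    # Assumed constants: kubelet image-pull back-off is commonly described as 10 s
    # doubling to a 5 min cap; these values are illustrative, not read from this node.
    INITIAL_S, CAP_S = 10, 300

    def backoff_delays():
        """Yield the delay (in seconds) before each successive pull retry."""
        delay = INITIAL_S
        while True:
            yield delay
            delay = min(delay * 2, CAP_S)

    print(list(itertools.islice(backoff_delays(), 7)))   # [10, 20, 40, 80, 160, 300, 300]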
Nov 7 23:50:08.506218 systemd-logind[1543]: Removed session 8. Nov 7 23:50:08.636774 containerd[1559]: time="2025-11-07T23:50:08.636718111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:08.638083 containerd[1559]: time="2025-11-07T23:50:08.638033544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 7 23:50:08.638164 containerd[1559]: time="2025-11-07T23:50:08.638127741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 7 23:50:08.638324 kubelet[2704]: E1107 23:50:08.638262 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:50:08.638324 kubelet[2704]: E1107 23:50:08.638313 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:50:08.638709 kubelet[2704]: E1107 23:50:08.638398 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b67dc9485-t4fjt_calico-system(04fb3d41-4a90-455a-820d-9b24bda7bc24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:08.638709 kubelet[2704]: E1107 23:50:08.638442 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b67dc9485-t4fjt" podUID="04fb3d41-4a90-455a-820d-9b24bda7bc24" Nov 7 23:50:09.221787 systemd-networkd[1471]: cali5ea4ec045bf: Gained IPv6LL Nov 7 23:50:09.455787 kubelet[2704]: E1107 23:50:09.455748 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:09.457555 kubelet[2704]: E1107 23:50:09.457352 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b67dc9485-t4fjt" podUID="04fb3d41-4a90-455a-820d-9b24bda7bc24" Nov 7 23:50:13.310469 containerd[1559]: time="2025-11-07T23:50:13.310411265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mxd8f,Uid:c9a8041c-b786-4595-b025-c55df53faaff,Namespace:calico-system,Attempt:0,}" Nov 7 23:50:13.421389 systemd-networkd[1471]: califf1b23b1f70: Link UP Nov 7 23:50:13.421773 systemd-networkd[1471]: califf1b23b1f70: Gained carrier Nov 7 23:50:13.440827 containerd[1559]: 2025-11-07 23:50:13.330 [INFO][4270] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:50:13.440827 containerd[1559]: 2025-11-07 23:50:13.344 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--mxd8f-eth0 goldmane-7c778bb748- calico-system c9a8041c-b786-4595-b025-c55df53faaff 839 0 2025-11-07 23:49:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-mxd8f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califf1b23b1f70 [] [] }} ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Namespace="calico-system" Pod="goldmane-7c778bb748-mxd8f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mxd8f-" Nov 7 23:50:13.440827 containerd[1559]: 2025-11-07 23:50:13.344 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Namespace="calico-system" Pod="goldmane-7c778bb748-mxd8f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" Nov 7 23:50:13.440827 containerd[1559]: 2025-11-07 23:50:13.374 [INFO][4285] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" HandleID="k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Workload="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.374 [INFO][4285] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" HandleID="k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Workload="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005924b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"goldmane-7c778bb748-mxd8f", "timestamp":"2025-11-07 23:50:13.374013719 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.374 [INFO][4285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.374 [INFO][4285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.374 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.386 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" host="localhost" Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.390 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.395 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.400 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.402 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:13.441047 containerd[1559]: 2025-11-07 23:50:13.402 [INFO][4285] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" host="localhost" Nov 7 23:50:13.441260 containerd[1559]: 2025-11-07 23:50:13.405 [INFO][4285] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713 Nov 7 23:50:13.441260 containerd[1559]: 2025-11-07 23:50:13.409 [INFO][4285] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" host="localhost" Nov 7 23:50:13.441260 containerd[1559]: 2025-11-07 23:50:13.414 [INFO][4285] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" host="localhost" Nov 7 23:50:13.441260 containerd[1559]: 2025-11-07 23:50:13.414 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" host="localhost" Nov 7 23:50:13.441260 containerd[1559]: 2025-11-07 23:50:13.414 [INFO][4285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:50:13.441260 containerd[1559]: 2025-11-07 23:50:13.414 [INFO][4285] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" HandleID="k8s-pod-network.fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Workload="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" Nov 7 23:50:13.441369 containerd[1559]: 2025-11-07 23:50:13.418 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Namespace="calico-system" Pod="goldmane-7c778bb748-mxd8f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--mxd8f-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c9a8041c-b786-4595-b025-c55df53faaff", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-mxd8f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califf1b23b1f70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:13.441369 containerd[1559]: 2025-11-07 23:50:13.418 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Namespace="calico-system" Pod="goldmane-7c778bb748-mxd8f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" Nov 7 23:50:13.441437 containerd[1559]: 2025-11-07 23:50:13.418 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf1b23b1f70 ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Namespace="calico-system" Pod="goldmane-7c778bb748-mxd8f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" Nov 7 23:50:13.441437 containerd[1559]: 2025-11-07 23:50:13.421 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Namespace="calico-system" Pod="goldmane-7c778bb748-mxd8f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" Nov 7 23:50:13.441480 containerd[1559]: 2025-11-07 23:50:13.422 [INFO][4270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Namespace="calico-system" Pod="goldmane-7c778bb748-mxd8f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--mxd8f-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c9a8041c-b786-4595-b025-c55df53faaff", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713", Pod:"goldmane-7c778bb748-mxd8f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califf1b23b1f70", MAC:"ca:17:fe:38:7e:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:13.441525 containerd[1559]: 2025-11-07 23:50:13.436 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" Namespace="calico-system" Pod="goldmane-7c778bb748-mxd8f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mxd8f-eth0" Nov 7 23:50:13.473840 containerd[1559]: time="2025-11-07T23:50:13.473795885Z" level=info msg="connecting to shim fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713" address="unix:///run/containerd/s/b0ca5f5c15af49d85725b62fdc61036b476ddbd0ff4d24d02a452bc0aad20d1b" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:50:13.499802 systemd[1]: Started cri-containerd-fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713.scope - libcontainer container fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713. Nov 7 23:50:13.505842 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:42526.service - OpenSSH per-connection server daemon (10.0.0.1:42526). Nov 7 23:50:13.512985 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:50:13.537881 containerd[1559]: time="2025-11-07T23:50:13.537769447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mxd8f,Uid:c9a8041c-b786-4595-b025-c55df53faaff,Namespace:calico-system,Attempt:0,} returns sandbox id \"fac79ef255327db8c28206c0096b483a7b9cec4d2aa9fc4bb7056f5c044f5713\"" Nov 7 23:50:13.540115 containerd[1559]: time="2025-11-07T23:50:13.540093935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 7 23:50:13.569255 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 42526 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:13.570422 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:13.574905 systemd-logind[1543]: New session 9 of user core. Nov 7 23:50:13.588841 systemd[1]: Started session-9.scope - Session 9 of User core. 
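The goldmane sandbox comes up on 192.168.88.130, and its image pull is about to fail with the same 404 pattern already seen for whisker and whisker-backend. Because the containerd error entries share one shape, a short parser can summarize a dump like this section. The sketch below is illustrative only: the regex targets the level=error msg="PullImage \"<ref>\" failed" lines exactly as they appear in this journal, and the journal.txt file name is hypothetical.

    import re
    from collections import Counter

    # Matches the containerd entries in this journal, e.g.
    #   level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed"
    PULL_FAIL = re.compile(r'msg="PullImage \\"([^"\\]+)\\" failed"')

    def failed_pulls(lines):
        """Tally image references whose pull failed, one count per matching entry."""
        return Counter(m.group(1) for line in lines for m in PULL_FAIL.finditer(line))

    # Hypothetical usage: journal.txt would hold a dump like this section.
    # with open("journal.txt") as fh:
    #     for ref, n in failed_pulls(fh).most_common():
    #         print(n, ref)

    sample = 'level=error msg="PullImage \\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\" failed"'
    print(failed_pulls([sample]))   # Counter({'ghcr.io/flatcar/calico/goldmane:v3.30.4': 1})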
Nov 7 23:50:13.755489 sshd[4350]: Connection closed by 10.0.0.1 port 42526 Nov 7 23:50:13.756109 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:13.760548 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:42526.service: Deactivated successfully. Nov 7 23:50:13.762315 systemd[1]: session-9.scope: Deactivated successfully. Nov 7 23:50:13.763179 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit. Nov 7 23:50:13.764106 systemd-logind[1543]: Removed session 9. Nov 7 23:50:13.825508 containerd[1559]: time="2025-11-07T23:50:13.825311191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:13.831025 containerd[1559]: time="2025-11-07T23:50:13.830968094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 7 23:50:13.831121 containerd[1559]: time="2025-11-07T23:50:13.831035092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 7 23:50:13.831248 kubelet[2704]: E1107 23:50:13.831211 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:50:13.831698 kubelet[2704]: E1107 23:50:13.831258 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:50:13.831698 kubelet[2704]: E1107 23:50:13.831336 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mxd8f_calico-system(c9a8041c-b786-4595-b025-c55df53faaff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:13.831698 kubelet[2704]: E1107 23:50:13.831364 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mxd8f" podUID="c9a8041c-b786-4595-b025-c55df53faaff" Nov 7 23:50:14.310617 kubelet[2704]: E1107 23:50:14.310554 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:14.311225 containerd[1559]: time="2025-11-07T23:50:14.311170321Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-4gm4k,Uid:949fc941-44a8-4b68-ab84-d05dd9327902,Namespace:kube-system,Attempt:0,}" Nov 7 23:50:14.421550 systemd-networkd[1471]: cali2813ea61fc6: Link UP Nov 7 23:50:14.421816 systemd-networkd[1471]: cali2813ea61fc6: Gained carrier Nov 7 23:50:14.435217 containerd[1559]: 2025-11-07 23:50:14.334 [INFO][4390] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:50:14.435217 containerd[1559]: 2025-11-07 23:50:14.352 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--4gm4k-eth0 coredns-66bc5c9577- kube-system 949fc941-44a8-4b68-ab84-d05dd9327902 838 0 2025-11-07 23:49:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-4gm4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2813ea61fc6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Namespace="kube-system" Pod="coredns-66bc5c9577-4gm4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4gm4k-" Nov 7 23:50:14.435217 containerd[1559]: 2025-11-07 23:50:14.352 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Namespace="kube-system" Pod="coredns-66bc5c9577-4gm4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" Nov 7 23:50:14.435217 containerd[1559]: 2025-11-07 23:50:14.376 [INFO][4404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" HandleID="k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Workload="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.376 [INFO][4404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" HandleID="k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Workload="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-4gm4k", "timestamp":"2025-11-07 23:50:14.376792128 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.377 [INFO][4404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.377 [INFO][4404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.377 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.387 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" host="localhost" Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.393 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.398 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.401 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.403 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:14.435427 containerd[1559]: 2025-11-07 23:50:14.403 [INFO][4404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" host="localhost" Nov 7 23:50:14.435712 containerd[1559]: 2025-11-07 23:50:14.405 [INFO][4404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312 Nov 7 23:50:14.435712 containerd[1559]: 2025-11-07 23:50:14.410 [INFO][4404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" host="localhost" Nov 7 23:50:14.435712 containerd[1559]: 2025-11-07 23:50:14.417 [INFO][4404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" host="localhost" Nov 7 23:50:14.435712 containerd[1559]: 2025-11-07 23:50:14.417 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" host="localhost" Nov 7 23:50:14.435712 containerd[1559]: 2025-11-07 23:50:14.417 [INFO][4404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:50:14.435712 containerd[1559]: 2025-11-07 23:50:14.417 [INFO][4404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" HandleID="k8s-pod-network.7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Workload="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" Nov 7 23:50:14.435833 containerd[1559]: 2025-11-07 23:50:14.419 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Namespace="kube-system" Pod="coredns-66bc5c9577-4gm4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4gm4k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"949fc941-44a8-4b68-ab84-d05dd9327902", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-4gm4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2813ea61fc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:14.435833 containerd[1559]: 2025-11-07 23:50:14.419 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Namespace="kube-system" Pod="coredns-66bc5c9577-4gm4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" Nov 7 23:50:14.435833 containerd[1559]: 2025-11-07 23:50:14.419 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2813ea61fc6 ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Namespace="kube-system" Pod="coredns-66bc5c9577-4gm4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" Nov 7 23:50:14.435833 containerd[1559]: 2025-11-07 23:50:14.422 
[INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Namespace="kube-system" Pod="coredns-66bc5c9577-4gm4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" Nov 7 23:50:14.435833 containerd[1559]: 2025-11-07 23:50:14.422 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Namespace="kube-system" Pod="coredns-66bc5c9577-4gm4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4gm4k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"949fc941-44a8-4b68-ab84-d05dd9327902", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312", Pod:"coredns-66bc5c9577-4gm4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2813ea61fc6", MAC:"ea:79:b6:c6:a5:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:14.435833 containerd[1559]: 2025-11-07 23:50:14.433 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" Namespace="kube-system" Pod="coredns-66bc5c9577-4gm4k" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4gm4k-eth0" Nov 7 23:50:14.467920 containerd[1559]: time="2025-11-07T23:50:14.467875320Z" level=info msg="connecting to shim 7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312" address="unix:///run/containerd/s/1551d056746bd148ca406c7e6db7017550cd8d314cdefe1cc43e5fec1a138367" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:50:14.475651 kubelet[2704]: E1107 23:50:14.471605 2704 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mxd8f" podUID="c9a8041c-b786-4595-b025-c55df53faaff" Nov 7 23:50:14.494031 systemd[1]: Started cri-containerd-7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312.scope - libcontainer container 7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312. Nov 7 23:50:14.515018 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:50:14.556042 containerd[1559]: time="2025-11-07T23:50:14.554759920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4gm4k,Uid:949fc941-44a8-4b68-ab84-d05dd9327902,Namespace:kube-system,Attempt:0,} returns sandbox id \"7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312\"" Nov 7 23:50:14.556284 kubelet[2704]: E1107 23:50:14.555752 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:14.560090 containerd[1559]: time="2025-11-07T23:50:14.560048279Z" level=info msg="CreateContainer within sandbox \"7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 7 23:50:14.580323 containerd[1559]: time="2025-11-07T23:50:14.578972664Z" level=info msg="Container c1a8b872c2774306b9f70e4c1d2d879f0cc4404600b2ba3793a645e5fb89ca83: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:50:14.582142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount459206420.mount: Deactivated successfully. Nov 7 23:50:14.589189 containerd[1559]: time="2025-11-07T23:50:14.589130516Z" level=info msg="CreateContainer within sandbox \"7947ac0e898d7a686cc5c9fd03b381c0e750a5d33d7389a96ee0c0474b231312\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1a8b872c2774306b9f70e4c1d2d879f0cc4404600b2ba3793a645e5fb89ca83\"" Nov 7 23:50:14.589883 containerd[1559]: time="2025-11-07T23:50:14.589840734Z" level=info msg="StartContainer for \"c1a8b872c2774306b9f70e4c1d2d879f0cc4404600b2ba3793a645e5fb89ca83\"" Nov 7 23:50:14.591344 containerd[1559]: time="2025-11-07T23:50:14.591199293Z" level=info msg="connecting to shim c1a8b872c2774306b9f70e4c1d2d879f0cc4404600b2ba3793a645e5fb89ca83" address="unix:///run/containerd/s/1551d056746bd148ca406c7e6db7017550cd8d314cdefe1cc43e5fec1a138367" protocol=ttrpc version=3 Nov 7 23:50:14.613859 systemd[1]: Started cri-containerd-c1a8b872c2774306b9f70e4c1d2d879f0cc4404600b2ba3793a645e5fb89ca83.scope - libcontainer container c1a8b872c2774306b9f70e4c1d2d879f0cc4404600b2ba3793a645e5fb89ca83. 
Nov 7 23:50:14.644306 containerd[1559]: time="2025-11-07T23:50:14.644266840Z" level=info msg="StartContainer for \"c1a8b872c2774306b9f70e4c1d2d879f0cc4404600b2ba3793a645e5fb89ca83\" returns successfully" Nov 7 23:50:15.301836 systemd-networkd[1471]: califf1b23b1f70: Gained IPv6LL Nov 7 23:50:15.319353 containerd[1559]: time="2025-11-07T23:50:15.319310106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564977585f-zfthj,Uid:c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec,Namespace:calico-system,Attempt:0,}" Nov 7 23:50:15.321735 containerd[1559]: time="2025-11-07T23:50:15.321696195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxrk8,Uid:cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c,Namespace:calico-system,Attempt:0,}" Nov 7 23:50:15.323446 containerd[1559]: time="2025-11-07T23:50:15.323398905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b4fd87cbb-lbv5q,Uid:713c65e9-5ca4-4cc6-9849-2451c1fb60f7,Namespace:calico-apiserver,Attempt:0,}" Nov 7 23:50:15.338556 kubelet[2704]: I1107 23:50:15.338501 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 7 23:50:15.338980 kubelet[2704]: E1107 23:50:15.338912 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:15.478791 kubelet[2704]: E1107 23:50:15.477878 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:15.478791 kubelet[2704]: E1107 23:50:15.478104 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:15.478959 kubelet[2704]: E1107 23:50:15.478808 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mxd8f" podUID="c9a8041c-b786-4595-b025-c55df53faaff" Nov 7 23:50:15.531360 kubelet[2704]: I1107 23:50:15.531291 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4gm4k" podStartSLOduration=39.531232998 podStartE2EDuration="39.531232998s" podCreationTimestamp="2025-11-07 23:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:50:15.511807252 +0000 UTC m=+47.311668561" watchObservedRunningTime="2025-11-07 23:50:15.531232998 +0000 UTC m=+47.331094307" Nov 7 23:50:15.601594 systemd-networkd[1471]: cali36f812b5c34: Link UP Nov 7 23:50:15.605958 systemd-networkd[1471]: cali36f812b5c34: Gained carrier Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.374 [INFO][4549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.475 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0 calico-apiserver-6b4fd87cbb- calico-apiserver 713c65e9-5ca4-4cc6-9849-2451c1fb60f7 837 0 2025-11-07 23:49:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b4fd87cbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b4fd87cbb-lbv5q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali36f812b5c34 [] [] }} ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-lbv5q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.476 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-lbv5q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.523 [INFO][4575] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" HandleID="k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Workload="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.523 [INFO][4575] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" HandleID="k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Workload="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cbb10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b4fd87cbb-lbv5q", "timestamp":"2025-11-07 23:50:15.52336559 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.523 [INFO][4575] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.523 [INFO][4575] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.523 [INFO][4575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.543 [INFO][4575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.551 [INFO][4575] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.557 [INFO][4575] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.560 [INFO][4575] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.564 [INFO][4575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.564 [INFO][4575] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.566 [INFO][4575] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140 Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.571 [INFO][4575] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.587 [INFO][4575] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.588 [INFO][4575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" host="localhost" Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.588 [INFO][4575] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:50:15.631627 containerd[1559]: 2025-11-07 23:50:15.588 [INFO][4575] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" HandleID="k8s-pod-network.797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Workload="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" Nov 7 23:50:15.632198 containerd[1559]: 2025-11-07 23:50:15.591 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-lbv5q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0", GenerateName:"calico-apiserver-6b4fd87cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"713c65e9-5ca4-4cc6-9849-2451c1fb60f7", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b4fd87cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b4fd87cbb-lbv5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36f812b5c34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:15.632198 containerd[1559]: 2025-11-07 23:50:15.592 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-lbv5q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" Nov 7 23:50:15.632198 containerd[1559]: 2025-11-07 23:50:15.592 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36f812b5c34 ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-lbv5q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" Nov 7 23:50:15.632198 containerd[1559]: 2025-11-07 23:50:15.605 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-lbv5q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" Nov 7 23:50:15.632198 containerd[1559]: 2025-11-07 23:50:15.607 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-lbv5q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0", GenerateName:"calico-apiserver-6b4fd87cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"713c65e9-5ca4-4cc6-9849-2451c1fb60f7", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b4fd87cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140", Pod:"calico-apiserver-6b4fd87cbb-lbv5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36f812b5c34", MAC:"5e:f8:7d:e7:d3:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:15.632198 containerd[1559]: 2025-11-07 23:50:15.629 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-lbv5q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--lbv5q-eth0" Nov 7 23:50:15.664220 containerd[1559]: time="2025-11-07T23:50:15.664086068Z" level=info msg="connecting to shim 797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140" address="unix:///run/containerd/s/b96085d6b093fac96a517012687662b51ab990ba6d341a419c3df4f8c54cafcd" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:50:15.685835 systemd-networkd[1471]: cali2813ea61fc6: Gained IPv6LL Nov 7 23:50:15.704894 systemd[1]: Started cri-containerd-797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140.scope - libcontainer container 797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140. 
Nov 7 23:50:15.705331 systemd-networkd[1471]: cali898e06fc3c4: Link UP Nov 7 23:50:15.705553 systemd-networkd[1471]: cali898e06fc3c4: Gained carrier Nov 7 23:50:15.724565 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.355 [INFO][4528] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.474 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0 calico-kube-controllers-564977585f- calico-system c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec 835 0 2025-11-07 23:49:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:564977585f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-564977585f-zfthj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali898e06fc3c4 [] [] }} ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Namespace="calico-system" Pod="calico-kube-controllers-564977585f-zfthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564977585f--zfthj-" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.475 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Namespace="calico-system" Pod="calico-kube-controllers-564977585f-zfthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.537 [INFO][4576] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" HandleID="k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Workload="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.538 [INFO][4576] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" HandleID="k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Workload="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-564977585f-zfthj", "timestamp":"2025-11-07 23:50:15.537546571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.538 [INFO][4576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.588 [INFO][4576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.588 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.644 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.660 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.668 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.671 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.678 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.678 [INFO][4576] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.681 [INFO][4576] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4 Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.686 [INFO][4576] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.694 [INFO][4576] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.694 [INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" host="localhost" Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.694 [INFO][4576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
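The IPAM sequence above ends with 192.168.88.133 being claimed for calico-kube-controllers out of the host-affine block 192.168.88.128/26. As a quick sanity check on those numbers (a toy illustration only; Calico's real allocator keeps per-block state in its datastore), the Go standard library can confirm the claimed address sits inside the block and that a /26 holds 64 addresses:

// Toy arithmetic mirroring the figures in the log above; not Calico code.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // host-affine block from the log
	claimed := netip.MustParseAddr("192.168.88.133")    // address claimed for calico-kube-controllers-564977585f-zfthj

	fmt.Println("claimed address inside block:", block.Contains(claimed)) // true
	fmt.Println("addresses per /26 block:", 1<<(32-block.Bits()))         // 64
}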
Nov 7 23:50:15.725272 containerd[1559]: 2025-11-07 23:50:15.694 [INFO][4576] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" HandleID="k8s-pod-network.adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Workload="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" Nov 7 23:50:15.725795 containerd[1559]: 2025-11-07 23:50:15.699 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Namespace="calico-system" Pod="calico-kube-controllers-564977585f-zfthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0", GenerateName:"calico-kube-controllers-564977585f-", Namespace:"calico-system", SelfLink:"", UID:"c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564977585f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-564977585f-zfthj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali898e06fc3c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:15.725795 containerd[1559]: 2025-11-07 23:50:15.699 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Namespace="calico-system" Pod="calico-kube-controllers-564977585f-zfthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" Nov 7 23:50:15.725795 containerd[1559]: 2025-11-07 23:50:15.699 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali898e06fc3c4 ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Namespace="calico-system" Pod="calico-kube-controllers-564977585f-zfthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" Nov 7 23:50:15.725795 containerd[1559]: 2025-11-07 23:50:15.705 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Namespace="calico-system" Pod="calico-kube-controllers-564977585f-zfthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" Nov 7 23:50:15.725795 containerd[1559]: 2025-11-07 23:50:15.706 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Namespace="calico-system" Pod="calico-kube-controllers-564977585f-zfthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0", GenerateName:"calico-kube-controllers-564977585f-", Namespace:"calico-system", SelfLink:"", UID:"c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564977585f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4", Pod:"calico-kube-controllers-564977585f-zfthj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali898e06fc3c4", MAC:"52:94:c4:37:3f:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:15.725795 containerd[1559]: 2025-11-07 23:50:15.723 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" Namespace="calico-system" Pod="calico-kube-controllers-564977585f-zfthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564977585f--zfthj-eth0" Nov 7 23:50:15.757627 containerd[1559]: time="2025-11-07T23:50:15.757587702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b4fd87cbb-lbv5q,Uid:713c65e9-5ca4-4cc6-9849-2451c1fb60f7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"797c004f80dc3c7cc036d1639f2303f1c619ffea60eb7d7693bdacf93cd9c140\"" Nov 7 23:50:15.760672 containerd[1559]: time="2025-11-07T23:50:15.760479617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:50:15.767460 containerd[1559]: time="2025-11-07T23:50:15.767358693Z" level=info msg="connecting to shim adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4" address="unix:///run/containerd/s/76e03d3c9fc2cea3ca50b8ba284a645bec5ce7f729d82f2e0b2bac76ffa7add4" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:50:15.809274 systemd[1]: Started cri-containerd-adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4.scope - libcontainer container adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4. 
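At this point the sandbox for calico-apiserver-6b4fd87cbb-lbv5q is up and containerd has been asked to pull ghcr.io/flatcar/calico/apiserver:v3.30.4; further down the pull fails with 404 Not Found. The lookup can be reproduced outside the kubelet with a plain OCI registry request. This is a hedged sketch: it assumes ghcr.io's usual anonymous token flow for public repositories (the token endpoint and Accept header are assumptions, not taken from this log):

// Hedged sketch: reproduce the registry lookup that fails later in this log.
// Assumes ghcr.io's standard anonymous OCI token flow for public repositories.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	const repo = "flatcar/calico/apiserver" // image name from the log
	const tag = "v3.30.4"                   // tag from the log

	// Anonymous pull token (assumed to work for public repositories on ghcr.io).
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// Manifest existence check: 200 means the tag resolves; 404 matches the
	// "failed to resolve reference" errors containerd reports below.
	req, _ := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println("manifest status:", res.Status)
}

A 404 from this check corresponds to the ErrImagePull entries that follow.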
Nov 7 23:50:15.810039 systemd-networkd[1471]: cali7ea92759e63: Link UP Nov 7 23:50:15.811315 systemd-networkd[1471]: cali7ea92759e63: Gained carrier Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.356 [INFO][4539] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.476 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nxrk8-eth0 csi-node-driver- calico-system cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c 734 0 2025-11-07 23:49:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nxrk8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7ea92759e63 [] [] }} ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Namespace="calico-system" Pod="csi-node-driver-nxrk8" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxrk8-" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.477 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Namespace="calico-system" Pod="csi-node-driver-nxrk8" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxrk8-eth0" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.543 [INFO][4578] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" HandleID="k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Workload="localhost-k8s-csi--node--driver--nxrk8-eth0" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.543 [INFO][4578] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" HandleID="k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Workload="localhost-k8s-csi--node--driver--nxrk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004287d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nxrk8", "timestamp":"2025-11-07 23:50:15.543163845 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.543 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.694 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.694 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.745 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.754 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.768 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.771 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.774 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.774 [INFO][4578] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.777 [INFO][4578] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.786 [INFO][4578] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.800 [INFO][4578] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.800 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" host="localhost" Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.800 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
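This second pass through the allocator hands 192.168.88.134 to csi-node-driver-nxrk8 from the same block that already supplied .132 and .133 above. A minimal sketch of the "lowest free address in the block" idea (purely illustrative; Calico's allocator is bitmap- and datastore-backed, and the addresses below .132 are simply assumed to have been taken before this excerpt):

// Illustrative only: choose the lowest unclaimed address in a block, mirroring
// the .132 -> .133 -> .134 progression visible in this log. Not Calico code.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block that is not yet claimed.
func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Assumption: .128-.131 went to workloads before this excerpt; .132 and
	// .133 are the assignments shown earlier in the log.
	claimed := map[netip.Addr]bool{}
	for _, s := range []string{
		"192.168.88.128", "192.168.88.129", "192.168.88.130", "192.168.88.131",
		"192.168.88.132", "192.168.88.133",
	} {
		claimed[netip.MustParseAddr(s)] = true
	}

	if a, ok := nextFree(block, claimed); ok {
		fmt.Println("next address:", a) // 192.168.88.134, as assigned above
	}
}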
Nov 7 23:50:15.833737 containerd[1559]: 2025-11-07 23:50:15.800 [INFO][4578] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" HandleID="k8s-pod-network.278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Workload="localhost-k8s-csi--node--driver--nxrk8-eth0" Nov 7 23:50:15.834289 containerd[1559]: 2025-11-07 23:50:15.806 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Namespace="calico-system" Pod="csi-node-driver-nxrk8" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxrk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxrk8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nxrk8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ea92759e63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:15.834289 containerd[1559]: 2025-11-07 23:50:15.806 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Namespace="calico-system" Pod="csi-node-driver-nxrk8" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxrk8-eth0" Nov 7 23:50:15.834289 containerd[1559]: 2025-11-07 23:50:15.806 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ea92759e63 ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Namespace="calico-system" Pod="csi-node-driver-nxrk8" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxrk8-eth0" Nov 7 23:50:15.834289 containerd[1559]: 2025-11-07 23:50:15.811 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Namespace="calico-system" Pod="csi-node-driver-nxrk8" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxrk8-eth0" Nov 7 23:50:15.834289 containerd[1559]: 2025-11-07 23:50:15.811 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Namespace="calico-system" Pod="csi-node-driver-nxrk8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nxrk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxrk8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c", Pod:"csi-node-driver-nxrk8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ea92759e63", MAC:"2a:0a:ef:92:d2:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:15.834289 containerd[1559]: 2025-11-07 23:50:15.827 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" Namespace="calico-system" Pod="csi-node-driver-nxrk8" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxrk8-eth0" Nov 7 23:50:15.838228 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:50:15.879251 containerd[1559]: time="2025-11-07T23:50:15.878096178Z" level=info msg="connecting to shim 278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c" address="unix:///run/containerd/s/4419f00c62832c4ac3384edd5fb8f9cd14d357677335b0ad759e3353f7bd0d7f" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:50:15.922076 containerd[1559]: time="2025-11-07T23:50:15.922022959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564977585f-zfthj,Uid:c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec,Namespace:calico-system,Attempt:0,} returns sandbox id \"adbc5da862e17c13cb284712f7c25ee0e6d5560e0e06d1632cd2d996ff1a5fe4\"" Nov 7 23:50:15.927070 systemd[1]: Started cri-containerd-278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c.scope - libcontainer container 278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c. 
Nov 7 23:50:15.948380 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:50:15.983933 containerd[1559]: time="2025-11-07T23:50:15.983889289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxrk8,Uid:cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c,Namespace:calico-system,Attempt:0,} returns sandbox id \"278682483340b0d8517e90f352e95e1c57c81681e778db6d5dc2d7895636dc4c\"" Nov 7 23:50:16.005593 containerd[1559]: time="2025-11-07T23:50:16.005543851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:16.012697 containerd[1559]: time="2025-11-07T23:50:16.012500171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:50:16.013747 containerd[1559]: time="2025-11-07T23:50:16.013698577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:50:16.013986 kubelet[2704]: E1107 23:50:16.013943 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:50:16.014036 kubelet[2704]: E1107 23:50:16.013996 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:50:16.014278 kubelet[2704]: E1107 23:50:16.014203 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b4fd87cbb-lbv5q_calico-apiserver(713c65e9-5ca4-4cc6-9849-2451c1fb60f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:16.014278 kubelet[2704]: E1107 23:50:16.014238 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" podUID="713c65e9-5ca4-4cc6-9849-2451c1fb60f7" Nov 7 23:50:16.015654 containerd[1559]: time="2025-11-07T23:50:16.014833704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 7 23:50:16.035955 systemd-networkd[1471]: vxlan.calico: Link UP Nov 7 23:50:16.035967 systemd-networkd[1471]: vxlan.calico: Gained carrier Nov 7 23:50:16.223756 containerd[1559]: time="2025-11-07T23:50:16.223377818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 
23:50:16.230688 containerd[1559]: time="2025-11-07T23:50:16.230587091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 7 23:50:16.231598 containerd[1559]: time="2025-11-07T23:50:16.230659609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 7 23:50:16.231680 kubelet[2704]: E1107 23:50:16.230922 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 7 23:50:16.231680 kubelet[2704]: E1107 23:50:16.230973 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 7 23:50:16.231680 kubelet[2704]: E1107 23:50:16.231140 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-564977585f-zfthj_calico-system(c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:16.231680 kubelet[2704]: E1107 23:50:16.231176 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564977585f-zfthj" podUID="c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec" Nov 7 23:50:16.231994 containerd[1559]: time="2025-11-07T23:50:16.231620701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 7 23:50:16.323674 containerd[1559]: time="2025-11-07T23:50:16.323137266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b4fd87cbb-4qgj5,Uid:7c146669-da8e-492d-867a-402ab7ddcdae,Namespace:calico-apiserver,Attempt:0,}" Nov 7 23:50:16.324571 kubelet[2704]: E1107 23:50:16.324521 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:16.335237 containerd[1559]: time="2025-11-07T23:50:16.335133600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mljdd,Uid:7ebc1790-987e-4353-8d75-b2b09f07e98c,Namespace:kube-system,Attempt:0,}" Nov 7 23:50:16.433410 containerd[1559]: 
time="2025-11-07T23:50:16.433355452Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:16.434359 containerd[1559]: time="2025-11-07T23:50:16.434319344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 7 23:50:16.434448 containerd[1559]: time="2025-11-07T23:50:16.434390302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 7 23:50:16.434843 kubelet[2704]: E1107 23:50:16.434801 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:50:16.436605 kubelet[2704]: E1107 23:50:16.434857 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:50:16.436605 kubelet[2704]: E1107 23:50:16.434947 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nxrk8_calico-system(cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:16.437087 containerd[1559]: time="2025-11-07T23:50:16.437060185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 7 23:50:16.449831 systemd-networkd[1471]: calie3d162d72ef: Link UP Nov 7 23:50:16.451014 systemd-networkd[1471]: calie3d162d72ef: Gained carrier Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.378 [INFO][4903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--mljdd-eth0 coredns-66bc5c9577- kube-system 7ebc1790-987e-4353-8d75-b2b09f07e98c 832 0 2025-11-07 23:49:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-mljdd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie3d162d72ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Namespace="kube-system" Pod="coredns-66bc5c9577-mljdd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mljdd-" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.378 [INFO][4903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Namespace="kube-system" Pod="coredns-66bc5c9577-mljdd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" Nov 7 23:50:16.464865 
containerd[1559]: 2025-11-07 23:50:16.402 [INFO][4922] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" HandleID="k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Workload="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.402 [INFO][4922] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" HandleID="k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Workload="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ddc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-mljdd", "timestamp":"2025-11-07 23:50:16.402136351 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.402 [INFO][4922] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.402 [INFO][4922] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.402 [INFO][4922] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.412 [INFO][4922] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.417 [INFO][4922] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.421 [INFO][4922] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.424 [INFO][4922] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.427 [INFO][4922] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.427 [INFO][4922] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.429 [INFO][4922] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.433 [INFO][4922] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.441 [INFO][4922] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.441 [INFO][4922] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" host="localhost" Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.441 [INFO][4922] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 7 23:50:16.464865 containerd[1559]: 2025-11-07 23:50:16.442 [INFO][4922] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" HandleID="k8s-pod-network.255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Workload="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" Nov 7 23:50:16.465438 containerd[1559]: 2025-11-07 23:50:16.447 [INFO][4903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Namespace="kube-system" Pod="coredns-66bc5c9577-mljdd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mljdd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7ebc1790-987e-4353-8d75-b2b09f07e98c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-mljdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3d162d72ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:16.465438 containerd[1559]: 2025-11-07 23:50:16.448 [INFO][4903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Namespace="kube-system" Pod="coredns-66bc5c9577-mljdd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" Nov 7 23:50:16.465438 containerd[1559]: 2025-11-07 23:50:16.448 [INFO][4903] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to calie3d162d72ef ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Namespace="kube-system" Pod="coredns-66bc5c9577-mljdd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" Nov 7 23:50:16.465438 containerd[1559]: 2025-11-07 23:50:16.449 [INFO][4903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Namespace="kube-system" Pod="coredns-66bc5c9577-mljdd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" Nov 7 23:50:16.465438 containerd[1559]: 2025-11-07 23:50:16.449 [INFO][4903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Namespace="kube-system" Pod="coredns-66bc5c9577-mljdd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mljdd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7ebc1790-987e-4353-8d75-b2b09f07e98c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d", Pod:"coredns-66bc5c9577-mljdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3d162d72ef", MAC:"32:37:f7:cf:3d:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:16.465438 containerd[1559]: 2025-11-07 23:50:16.460 [INFO][4903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" Namespace="kube-system" Pod="coredns-66bc5c9577-mljdd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mljdd-eth0" Nov 7 23:50:16.485023 kubelet[2704]: E1107 23:50:16.484857 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" podUID="713c65e9-5ca4-4cc6-9849-2451c1fb60f7" Nov 7 23:50:16.489843 kubelet[2704]: E1107 23:50:16.489794 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564977585f-zfthj" podUID="c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec" Nov 7 23:50:16.491511 kubelet[2704]: E1107 23:50:16.491487 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:16.513900 containerd[1559]: time="2025-11-07T23:50:16.513815135Z" level=info msg="connecting to shim 255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d" address="unix:///run/containerd/s/908218c4cbfb920e3271d192b53f2ab66fb9c1a4be47f1a4d2949f49a053bb80" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:50:16.575915 systemd[1]: Started cri-containerd-255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d.scope - libcontainer container 255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d. 
Nov 7 23:50:16.583315 systemd-networkd[1471]: cali2e9a16f546e: Link UP Nov 7 23:50:16.584681 systemd-networkd[1471]: cali2e9a16f546e: Gained carrier Nov 7 23:50:16.592855 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.375 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0 calico-apiserver-6b4fd87cbb- calico-apiserver 7c146669-da8e-492d-867a-402ab7ddcdae 836 0 2025-11-07 23:49:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b4fd87cbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b4fd87cbb-4qgj5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e9a16f546e [] [] }} ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-4qgj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.375 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-4qgj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.410 [INFO][4920] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" HandleID="k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Workload="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.411 [INFO][4920] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" HandleID="k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Workload="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b4fd87cbb-4qgj5", "timestamp":"2025-11-07 23:50:16.410937057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.411 [INFO][4920] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.441 [INFO][4920] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.441 [INFO][4920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.513 [INFO][4920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.533 [INFO][4920] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.544 [INFO][4920] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.550 [INFO][4920] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.554 [INFO][4920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.554 [INFO][4920] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.557 [INFO][4920] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116 Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.564 [INFO][4920] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.575 [INFO][4920] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.575 [INFO][4920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" host="localhost" Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.575 [INFO][4920] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:50:16.605616 containerd[1559]: 2025-11-07 23:50:16.575 [INFO][4920] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" HandleID="k8s-pod-network.ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Workload="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" Nov 7 23:50:16.606971 containerd[1559]: 2025-11-07 23:50:16.578 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-4qgj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0", GenerateName:"calico-apiserver-6b4fd87cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c146669-da8e-492d-867a-402ab7ddcdae", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b4fd87cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b4fd87cbb-4qgj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e9a16f546e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:16.606971 containerd[1559]: 2025-11-07 23:50:16.578 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-4qgj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" Nov 7 23:50:16.606971 containerd[1559]: 2025-11-07 23:50:16.578 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e9a16f546e ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-4qgj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" Nov 7 23:50:16.606971 containerd[1559]: 2025-11-07 23:50:16.585 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-4qgj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" Nov 7 23:50:16.606971 containerd[1559]: 2025-11-07 23:50:16.592 [INFO][4892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-4qgj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0", GenerateName:"calico-apiserver-6b4fd87cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c146669-da8e-492d-867a-402ab7ddcdae", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b4fd87cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116", Pod:"calico-apiserver-6b4fd87cbb-4qgj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e9a16f546e", MAC:"ce:23:3e:bf:a3:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:50:16.606971 containerd[1559]: 2025-11-07 23:50:16.601 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" Namespace="calico-apiserver" Pod="calico-apiserver-6b4fd87cbb-4qgj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b4fd87cbb--4qgj5-eth0" Nov 7 23:50:16.621901 containerd[1559]: time="2025-11-07T23:50:16.621847064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mljdd,Uid:7ebc1790-987e-4353-8d75-b2b09f07e98c,Namespace:kube-system,Attempt:0,} returns sandbox id \"255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d\"" Nov 7 23:50:16.623381 kubelet[2704]: E1107 23:50:16.623349 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:16.629053 containerd[1559]: time="2025-11-07T23:50:16.628840542Z" level=info msg="CreateContainer within sandbox \"255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 7 23:50:16.631554 containerd[1559]: time="2025-11-07T23:50:16.631291032Z" level=info msg="connecting to shim ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116" address="unix:///run/containerd/s/19ea645614dccd91e93885cdb8c1258a7354d5bdff743e8e17e38a88d38d6bd7" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:50:16.644081 containerd[1559]: time="2025-11-07T23:50:16.644026025Z" level=info msg="Container edd95cea0d39188017b362119e4067275f9dade57e936c53e39da47f91152ca3: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:50:16.651498 containerd[1559]: 
time="2025-11-07T23:50:16.651423412Z" level=info msg="CreateContainer within sandbox \"255d007a64763d798905ae23e4f6d7d20b2fe98fd5eb6b5d7f872bb711a0e60d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edd95cea0d39188017b362119e4067275f9dade57e936c53e39da47f91152ca3\"" Nov 7 23:50:16.652103 containerd[1559]: time="2025-11-07T23:50:16.652059874Z" level=info msg="StartContainer for \"edd95cea0d39188017b362119e4067275f9dade57e936c53e39da47f91152ca3\"" Nov 7 23:50:16.653247 containerd[1559]: time="2025-11-07T23:50:16.653190521Z" level=info msg="connecting to shim edd95cea0d39188017b362119e4067275f9dade57e936c53e39da47f91152ca3" address="unix:///run/containerd/s/908218c4cbfb920e3271d192b53f2ab66fb9c1a4be47f1a4d2949f49a053bb80" protocol=ttrpc version=3 Nov 7 23:50:16.654845 systemd[1]: Started cri-containerd-ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116.scope - libcontainer container ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116. Nov 7 23:50:16.669128 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:50:16.672817 systemd[1]: Started cri-containerd-edd95cea0d39188017b362119e4067275f9dade57e936c53e39da47f91152ca3.scope - libcontainer container edd95cea0d39188017b362119e4067275f9dade57e936c53e39da47f91152ca3. Nov 7 23:50:16.672975 containerd[1559]: time="2025-11-07T23:50:16.672861635Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:16.673980 containerd[1559]: time="2025-11-07T23:50:16.673875485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 7 23:50:16.674071 containerd[1559]: time="2025-11-07T23:50:16.674046241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 7 23:50:16.674561 kubelet[2704]: E1107 23:50:16.674516 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:50:16.674718 kubelet[2704]: E1107 23:50:16.674573 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:50:16.674718 kubelet[2704]: E1107 23:50:16.674685 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nxrk8_calico-system(cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:16.674964 kubelet[2704]: E1107 23:50:16.674724 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 23:50:16.704957 containerd[1559]: time="2025-11-07T23:50:16.704914632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b4fd87cbb-4qgj5,Uid:7c146669-da8e-492d-867a-402ab7ddcdae,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ddf2969de7a53a9c7b2ac81a81bf52d128cf0bd39f7848a3e7c247b49f76d116\"" Nov 7 23:50:16.707911 containerd[1559]: time="2025-11-07T23:50:16.707123288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:50:16.716587 containerd[1559]: time="2025-11-07T23:50:16.716552456Z" level=info msg="StartContainer for \"edd95cea0d39188017b362119e4067275f9dade57e936c53e39da47f91152ca3\" returns successfully" Nov 7 23:50:16.838174 systemd-networkd[1471]: cali36f812b5c34: Gained IPv6LL Nov 7 23:50:16.914063 containerd[1559]: time="2025-11-07T23:50:16.914004850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:16.915388 containerd[1559]: time="2025-11-07T23:50:16.915047900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:50:16.915388 containerd[1559]: time="2025-11-07T23:50:16.915148258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:50:16.915541 kubelet[2704]: E1107 23:50:16.915345 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:50:16.915541 kubelet[2704]: E1107 23:50:16.915393 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:50:16.915541 kubelet[2704]: E1107 23:50:16.915477 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b4fd87cbb-4qgj5_calico-apiserver(7c146669-da8e-492d-867a-402ab7ddcdae): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:16.915541 kubelet[2704]: E1107 23:50:16.915512 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" podUID="7c146669-da8e-492d-867a-402ab7ddcdae" Nov 7 23:50:17.093908 systemd-networkd[1471]: cali898e06fc3c4: Gained IPv6LL Nov 7 23:50:17.413815 systemd-networkd[1471]: vxlan.calico: Gained IPv6LL Nov 7 23:50:17.493753 kubelet[2704]: E1107 23:50:17.493559 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" podUID="7c146669-da8e-492d-867a-402ab7ddcdae" Nov 7 23:50:17.496318 kubelet[2704]: E1107 23:50:17.496239 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:17.497306 kubelet[2704]: E1107 23:50:17.496620 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:17.497715 kubelet[2704]: E1107 23:50:17.497674 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564977585f-zfthj" podUID="c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec" Nov 7 23:50:17.498049 kubelet[2704]: E1107 23:50:17.497729 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" podUID="713c65e9-5ca4-4cc6-9849-2451c1fb60f7" Nov 7 23:50:17.498049 kubelet[2704]: E1107 23:50:17.497943 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 23:50:17.580706 kubelet[2704]: I1107 23:50:17.580581 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mljdd" podStartSLOduration=41.580550454 podStartE2EDuration="41.580550454s" podCreationTimestamp="2025-11-07 23:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:50:17.579182893 +0000 UTC m=+49.379044242" watchObservedRunningTime="2025-11-07 23:50:17.580550454 +0000 UTC m=+49.380411763" Nov 7 23:50:17.605840 systemd-networkd[1471]: cali7ea92759e63: Gained IPv6LL Nov 7 23:50:17.861786 systemd-networkd[1471]: cali2e9a16f546e: Gained IPv6LL Nov 7 23:50:18.501280 kubelet[2704]: E1107 23:50:18.501230 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:18.501834 kubelet[2704]: E1107 23:50:18.501792 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" podUID="7c146669-da8e-492d-867a-402ab7ddcdae" Nov 7 23:50:18.502869 systemd-networkd[1471]: calie3d162d72ef: Gained IPv6LL Nov 7 23:50:18.774262 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:42530.service - OpenSSH per-connection server daemon (10.0.0.1:42530). Nov 7 23:50:18.850547 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 42530 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:18.852244 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:18.856324 systemd-logind[1543]: New session 10 of user core. Nov 7 23:50:18.866786 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 7 23:50:19.052422 sshd[5097]: Connection closed by 10.0.0.1 port 42530 Nov 7 23:50:19.053552 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:19.064721 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:42530.service: Deactivated successfully. Nov 7 23:50:19.067539 systemd[1]: session-10.scope: Deactivated successfully. 
Nov 7 23:50:19.069208 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Nov 7 23:50:19.072353 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:42534.service - OpenSSH per-connection server daemon (10.0.0.1:42534). Nov 7 23:50:19.073328 systemd-logind[1543]: Removed session 10. Nov 7 23:50:19.121955 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 42534 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:19.123218 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:19.127808 systemd-logind[1543]: New session 11 of user core. Nov 7 23:50:19.132800 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 7 23:50:19.318898 sshd[5117]: Connection closed by 10.0.0.1 port 42534 Nov 7 23:50:19.319834 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:19.336336 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:42534.service: Deactivated successfully. Nov 7 23:50:19.341907 systemd[1]: session-11.scope: Deactivated successfully. Nov 7 23:50:19.343823 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. Nov 7 23:50:19.345758 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:52416.service - OpenSSH per-connection server daemon (10.0.0.1:52416). Nov 7 23:50:19.348828 systemd-logind[1543]: Removed session 11. Nov 7 23:50:19.404402 sshd[5128]: Accepted publickey for core from 10.0.0.1 port 52416 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:19.405621 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:19.411297 systemd-logind[1543]: New session 12 of user core. Nov 7 23:50:19.420842 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 7 23:50:19.502393 kubelet[2704]: E1107 23:50:19.502362 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:19.561282 sshd[5131]: Connection closed by 10.0.0.1 port 52416 Nov 7 23:50:19.562254 sshd-session[5128]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:19.567009 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:52416.service: Deactivated successfully. Nov 7 23:50:19.568880 systemd[1]: session-12.scope: Deactivated successfully. Nov 7 23:50:19.570148 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit. Nov 7 23:50:19.572712 systemd-logind[1543]: Removed session 12. 
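The image-pull failures interleaved through these records all share one containerd logfmt shape: level=error msg="PullImage \"<ref>\" failed" error=... . A small, hypothetical helper (not part of this capture) that tallies such records from journal text piped on stdin:

```python
# Sketch: count containerd "PullImage ... failed" records per image reference,
# matching the escaped-quote logfmt shape shown in the journal above.
import re
import sys
from collections import Counter

PULL_FAILED = re.compile(r'PullImage \\"(?P<image>[^"\\]+)\\" failed')

def failed_pulls(lines):
    counts = Counter()
    for line in lines:
        m = PULL_FAILED.search(line)
        if m:
            counts[m.group("image")] += 1
    return counts

if __name__ == "__main__":
    for image, n in failed_pulls(sys.stdin).most_common():
        print(f"{n:3d}  {image}")
```

Fed the journal shown here, it would report each ghcr.io/flatcar/calico/*:v3.30.4 reference together with how often its pull failed.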
Nov 7 23:50:20.309962 containerd[1559]: time="2025-11-07T23:50:20.309628045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 7 23:50:20.509971 kubelet[2704]: E1107 23:50:20.509943 2704 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:50:20.522131 containerd[1559]: time="2025-11-07T23:50:20.522078700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:20.539391 containerd[1559]: time="2025-11-07T23:50:20.539326334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 7 23:50:20.539532 containerd[1559]: time="2025-11-07T23:50:20.539427131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 7 23:50:20.539616 kubelet[2704]: E1107 23:50:20.539547 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:50:20.539616 kubelet[2704]: E1107 23:50:20.539589 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:50:20.539907 kubelet[2704]: E1107 23:50:20.539701 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b67dc9485-t4fjt_calico-system(04fb3d41-4a90-455a-820d-9b24bda7bc24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:20.548261 containerd[1559]: time="2025-11-07T23:50:20.548220583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 7 23:50:20.767837 containerd[1559]: time="2025-11-07T23:50:20.767790094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:20.768700 containerd[1559]: time="2025-11-07T23:50:20.768661192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 7 23:50:20.768895 containerd[1559]: time="2025-11-07T23:50:20.768731310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 7 23:50:20.768951 kubelet[2704]: E1107 23:50:20.768908 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:50:20.768997 kubelet[2704]: E1107 23:50:20.768960 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:50:20.769098 kubelet[2704]: E1107 23:50:20.769062 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b67dc9485-t4fjt_calico-system(04fb3d41-4a90-455a-820d-9b24bda7bc24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:20.769137 kubelet[2704]: E1107 23:50:20.769106 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b67dc9485-t4fjt" podUID="04fb3d41-4a90-455a-820d-9b24bda7bc24" Nov 7 23:50:24.575295 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:52432.service - OpenSSH per-connection server daemon (10.0.0.1:52432). Nov 7 23:50:24.632351 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 52432 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:24.634012 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:24.646389 systemd-logind[1543]: New session 13 of user core. Nov 7 23:50:24.654437 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 7 23:50:24.801800 sshd[5160]: Connection closed by 10.0.0.1 port 52432 Nov 7 23:50:24.801621 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:24.811336 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:52432.service: Deactivated successfully. Nov 7 23:50:24.813222 systemd[1]: session-13.scope: Deactivated successfully. Nov 7 23:50:24.813878 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit. Nov 7 23:50:24.816860 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:52448.service - OpenSSH per-connection server daemon (10.0.0.1:52448). Nov 7 23:50:24.817534 systemd-logind[1543]: Removed session 13. 
Nov 7 23:50:24.873561 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 52448 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:24.875010 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:24.878996 systemd-logind[1543]: New session 14 of user core. Nov 7 23:50:24.888959 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 7 23:50:25.089793 sshd[5176]: Connection closed by 10.0.0.1 port 52448 Nov 7 23:50:25.090292 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:25.108118 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:52448.service: Deactivated successfully. Nov 7 23:50:25.110876 systemd[1]: session-14.scope: Deactivated successfully. Nov 7 23:50:25.112331 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit. Nov 7 23:50:25.114792 systemd-logind[1543]: Removed session 14. Nov 7 23:50:25.116726 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:52454.service - OpenSSH per-connection server daemon (10.0.0.1:52454). Nov 7 23:50:25.171309 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 52454 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:25.172603 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:25.176919 systemd-logind[1543]: New session 15 of user core. Nov 7 23:50:25.185031 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 7 23:50:26.090298 sshd[5191]: Connection closed by 10.0.0.1 port 52454 Nov 7 23:50:26.090748 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:26.100525 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:52454.service: Deactivated successfully. Nov 7 23:50:26.102985 systemd[1]: session-15.scope: Deactivated successfully. Nov 7 23:50:26.104243 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit. Nov 7 23:50:26.107495 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:52460.service - OpenSSH per-connection server daemon (10.0.0.1:52460). Nov 7 23:50:26.109220 systemd-logind[1543]: Removed session 15. Nov 7 23:50:26.165360 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 52460 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:26.167338 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:26.171729 systemd-logind[1543]: New session 16 of user core. Nov 7 23:50:26.178817 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 7 23:50:26.310096 containerd[1559]: time="2025-11-07T23:50:26.309554571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 7 23:50:26.508578 sshd[5214]: Connection closed by 10.0.0.1 port 52460 Nov 7 23:50:26.510006 sshd-session[5211]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:26.530007 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:52460.service: Deactivated successfully. Nov 7 23:50:26.531846 systemd[1]: session-16.scope: Deactivated successfully. Nov 7 23:50:26.535926 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit. Nov 7 23:50:26.542964 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:52464.service - OpenSSH per-connection server daemon (10.0.0.1:52464). Nov 7 23:50:26.543817 systemd-logind[1543]: Removed session 16. 
Nov 7 23:50:26.589841 containerd[1559]: time="2025-11-07T23:50:26.589796508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:26.591665 containerd[1559]: time="2025-11-07T23:50:26.591128438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 7 23:50:26.591665 containerd[1559]: time="2025-11-07T23:50:26.591196797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 7 23:50:26.591778 kubelet[2704]: E1107 23:50:26.591352 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:50:26.591778 kubelet[2704]: E1107 23:50:26.591393 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:50:26.591778 kubelet[2704]: E1107 23:50:26.591468 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mxd8f_calico-system(c9a8041c-b786-4595-b025-c55df53faaff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:26.591778 kubelet[2704]: E1107 23:50:26.591496 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mxd8f" podUID="c9a8041c-b786-4595-b025-c55df53faaff" Nov 7 23:50:26.613767 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 52464 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:26.615001 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:26.621605 systemd-logind[1543]: New session 17 of user core. Nov 7 23:50:26.629143 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 7 23:50:26.755391 sshd[5235]: Connection closed by 10.0.0.1 port 52464 Nov 7 23:50:26.755161 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:26.759149 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:52464.service: Deactivated successfully. Nov 7 23:50:26.761206 systemd[1]: session-17.scope: Deactivated successfully. Nov 7 23:50:26.762772 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit. Nov 7 23:50:26.763834 systemd-logind[1543]: Removed session 17. 
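Each ErrImagePull above is followed a little later by an ImagePullBackOff for the same container, as kubelet waits progressively longer between pull attempts. A sketch of that progression; the 10-second initial delay and 300-second cap are assumed kubelet defaults, not values visible in this log:

```python
# Sketch of an exponential back-off with a cap, illustrating the
# ErrImagePull -> ImagePullBackOff cycle seen in the records above.
# The 10s start and 300s ceiling are assumptions about kubelet defaults.
import itertools

def backoff_delays(initial: float = 10.0, cap: float = 300.0):
    delay = initial
    while True:
        yield min(delay, cap)
        delay = min(delay * 2, cap)

if __name__ == "__main__":
    print(list(itertools.islice(backoff_delays(), 8)))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```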
Nov 7 23:50:29.309673 containerd[1559]: time="2025-11-07T23:50:29.309144367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 7 23:50:29.523693 containerd[1559]: time="2025-11-07T23:50:29.523611947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:29.524679 containerd[1559]: time="2025-11-07T23:50:29.524626846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 7 23:50:29.524866 containerd[1559]: time="2025-11-07T23:50:29.524666086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 7 23:50:29.524945 kubelet[2704]: E1107 23:50:29.524902 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:50:29.525203 kubelet[2704]: E1107 23:50:29.524954 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:50:29.525203 kubelet[2704]: E1107 23:50:29.525035 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nxrk8_calico-system(cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:29.525969 containerd[1559]: time="2025-11-07T23:50:29.525941219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 7 23:50:29.756331 containerd[1559]: time="2025-11-07T23:50:29.756121476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:29.757615 containerd[1559]: time="2025-11-07T23:50:29.757541527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 7 23:50:29.757691 containerd[1559]: time="2025-11-07T23:50:29.757622045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 7 23:50:29.758336 kubelet[2704]: E1107 23:50:29.757828 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:50:29.758336 
kubelet[2704]: E1107 23:50:29.757892 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:50:29.758336 kubelet[2704]: E1107 23:50:29.757961 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nxrk8_calico-system(cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:29.758522 kubelet[2704]: E1107 23:50:29.758001 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nxrk8" podUID="cd5bfb52-b349-4f3d-ac38-78d6f47e1f8c" Nov 7 23:50:30.309531 containerd[1559]: time="2025-11-07T23:50:30.309367471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:50:30.516117 containerd[1559]: time="2025-11-07T23:50:30.516062155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:30.518053 containerd[1559]: time="2025-11-07T23:50:30.517988796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:50:30.518141 containerd[1559]: time="2025-11-07T23:50:30.518082355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:50:30.518307 kubelet[2704]: E1107 23:50:30.518246 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:50:30.518307 kubelet[2704]: E1107 23:50:30.518304 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:50:30.518521 kubelet[2704]: E1107 23:50:30.518381 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b4fd87cbb-4qgj5_calico-apiserver(7c146669-da8e-492d-867a-402ab7ddcdae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:30.518521 kubelet[2704]: E1107 23:50:30.518417 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-4qgj5" podUID="7c146669-da8e-492d-867a-402ab7ddcdae" Nov 7 23:50:31.309947 containerd[1559]: time="2025-11-07T23:50:31.309903225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 7 23:50:31.540807 containerd[1559]: time="2025-11-07T23:50:31.540619899Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:31.541850 containerd[1559]: time="2025-11-07T23:50:31.541727157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 7 23:50:31.541850 containerd[1559]: time="2025-11-07T23:50:31.541745477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 7 23:50:31.541948 kubelet[2704]: E1107 23:50:31.541912 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 7 23:50:31.542149 kubelet[2704]: E1107 23:50:31.541953 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 7 23:50:31.542149 kubelet[2704]: E1107 23:50:31.542029 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-564977585f-zfthj_calico-system(c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:31.542149 kubelet[2704]: E1107 23:50:31.542061 2704 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564977585f-zfthj" podUID="c6bfa01b-1c8c-4494-9a54-e48d2a2c5cec" Nov 7 23:50:31.769876 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:57000.service - OpenSSH per-connection server daemon (10.0.0.1:57000). Nov 7 23:50:31.826413 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 57000 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:31.828109 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:31.834694 systemd-logind[1543]: New session 18 of user core. Nov 7 23:50:31.849867 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 7 23:50:31.968510 sshd[5259]: Connection closed by 10.0.0.1 port 57000 Nov 7 23:50:31.968873 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:31.973676 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:57000.service: Deactivated successfully. Nov 7 23:50:31.975710 systemd[1]: session-18.scope: Deactivated successfully. Nov 7 23:50:31.976609 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit. Nov 7 23:50:31.977809 systemd-logind[1543]: Removed session 18. Nov 7 23:50:32.310027 containerd[1559]: time="2025-11-07T23:50:32.309869032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:50:32.524007 containerd[1559]: time="2025-11-07T23:50:32.523947172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:50:32.525024 containerd[1559]: time="2025-11-07T23:50:32.524972513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:50:32.525068 containerd[1559]: time="2025-11-07T23:50:32.525026152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:50:32.525293 kubelet[2704]: E1107 23:50:32.525249 2704 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:50:32.525344 kubelet[2704]: E1107 23:50:32.525305 2704 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:50:32.525399 kubelet[2704]: E1107 23:50:32.525379 2704 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b4fd87cbb-lbv5q_calico-apiserver(713c65e9-5ca4-4cc6-9849-2451c1fb60f7): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:50:32.525458 kubelet[2704]: E1107 23:50:32.525414 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b4fd87cbb-lbv5q" podUID="713c65e9-5ca4-4cc6-9849-2451c1fb60f7" Nov 7 23:50:35.311003 kubelet[2704]: E1107 23:50:35.310888 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b67dc9485-t4fjt" podUID="04fb3d41-4a90-455a-820d-9b24bda7bc24" Nov 7 23:50:36.984006 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:57014.service - OpenSSH per-connection server daemon (10.0.0.1:57014). Nov 7 23:50:37.045135 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 57014 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:50:37.046470 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:50:37.051126 systemd-logind[1543]: New session 19 of user core. Nov 7 23:50:37.060859 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 7 23:50:37.183760 sshd[5284]: Connection closed by 10.0.0.1 port 57014 Nov 7 23:50:37.184190 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Nov 7 23:50:37.188225 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:57014.service: Deactivated successfully. Nov 7 23:50:37.190091 systemd[1]: session-19.scope: Deactivated successfully. Nov 7 23:50:37.190999 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit. Nov 7 23:50:37.192090 systemd-logind[1543]: Removed session 19. 
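By this point the "Error syncing pod, skipping" records cover several workloads: whisker-5b67dc9485-t4fjt, goldmane-7c778bb748-mxd8f, csi-node-driver-nxrk8, both calico-apiserver-6b4fd87cbb replicas, and calico-kube-controllers-564977585f-zfthj. A hypothetical helper (not part of the capture) that lists the affected pods and UIDs from such records on stdin:

```python
# Sketch: extract the pod="..." and podUID="..." fields from kubelet
# "Error syncing pod, skipping" records like those in the journal above.
import re
import sys

SYNC_ERR = re.compile(
    r'"Error syncing pod, skipping".*?pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)

def skipped_pods(lines):
    seen = {}
    for line in lines:
        m = SYNC_ERR.search(line)
        if m:
            seen[m.group("uid")] = m.group("pod")
    return seen

if __name__ == "__main__":
    for uid, pod in skipped_pods(sys.stdin).items():
        print(f"{pod}  ({uid})")
```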
Nov 7 23:50:37.309185 kubelet[2704]: E1107 23:50:37.308953 2704 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mxd8f" podUID="c9a8041c-b786-4595-b025-c55df53faaff"