Nov 5 14:56:54.326188 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 5 14:56:54.326214 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Nov 5 13:42:06 -00 2025
Nov 5 14:56:54.326224 kernel: KASLR enabled
Nov 5 14:56:54.326230 kernel: efi: EFI v2.7 by EDK II
Nov 5 14:56:54.326236 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Nov 5 14:56:54.326241 kernel: random: crng init done
Nov 5 14:56:54.326248 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Nov 5 14:56:54.326255 kernel: secureboot: Secure boot enabled
Nov 5 14:56:54.326263 kernel: ACPI: Early table checksum verification disabled
Nov 5 14:56:54.326270 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Nov 5 14:56:54.326276 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 5 14:56:54.326283 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326289 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326296 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326305 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326315 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326322 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326329 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326336 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326346 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 14:56:54.326355 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 5 14:56:54.326361 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 5 14:56:54.326370 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 14:56:54.326376 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Nov 5 14:56:54.326383 kernel: Zone ranges:
Nov 5 14:56:54.326390 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 14:56:54.326397 kernel: DMA32 empty
Nov 5 14:56:54.326403 kernel: Normal empty
Nov 5 14:56:54.326411 kernel: Device empty
Nov 5 14:56:54.326418 kernel: Movable zone start for each node
Nov 5 14:56:54.326424 kernel: Early memory node ranges
Nov 5 14:56:54.326431 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Nov 5 14:56:54.326437 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Nov 5 14:56:54.326444 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Nov 5 14:56:54.326452 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Nov 5 14:56:54.326459 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Nov 5 14:56:54.326467 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Nov 5 14:56:54.326475 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Nov 5 14:56:54.326482 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Nov 5 14:56:54.326489 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 5 14:56:54.326500 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 14:56:54.326507 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 5 14:56:54.326521 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Nov 5 14:56:54.326528 kernel: psci: probing for conduit method from ACPI.
Nov 5 14:56:54.326535 kernel: psci: PSCIv1.1 detected in firmware.
Nov 5 14:56:54.326543 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 5 14:56:54.326550 kernel: psci: Trusted OS migration not required
Nov 5 14:56:54.326557 kernel: psci: SMC Calling Convention v1.1
Nov 5 14:56:54.326566 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 5 14:56:54.326573 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 5 14:56:54.326588 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 5 14:56:54.326595 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 5 14:56:54.326602 kernel: Detected PIPT I-cache on CPU0
Nov 5 14:56:54.326610 kernel: CPU features: detected: GIC system register CPU interface
Nov 5 14:56:54.326617 kernel: CPU features: detected: Spectre-v4
Nov 5 14:56:54.326623 kernel: CPU features: detected: Spectre-BHB
Nov 5 14:56:54.326630 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 5 14:56:54.326637 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 5 14:56:54.326644 kernel: CPU features: detected: ARM erratum 1418040
Nov 5 14:56:54.326653 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 5 14:56:54.326659 kernel: alternatives: applying boot alternatives
Nov 5 14:56:54.326667 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b
Nov 5 14:56:54.326674 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 14:56:54.326681 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 14:56:54.326688 kernel: Fallback order for Node 0: 0
Nov 5 14:56:54.326695 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Nov 5 14:56:54.326702 kernel: Policy zone: DMA
Nov 5 14:56:54.326709 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 14:56:54.326717 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 5 14:56:54.326725 kernel: software IO TLB: area num 4.
Nov 5 14:56:54.326732 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 5 14:56:54.326739 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Nov 5 14:56:54.326746 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 14:56:54.326753 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 14:56:54.326761 kernel: rcu: RCU event tracing is enabled.
Nov 5 14:56:54.326768 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 14:56:54.326775 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 14:56:54.326782 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 14:56:54.326789 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 14:56:54.326796 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 14:56:54.326803 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 14:56:54.326811 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 14:56:54.326818 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 5 14:56:54.326825 kernel: GICv3: 256 SPIs implemented
Nov 5 14:56:54.326832 kernel: GICv3: 0 Extended SPIs implemented
Nov 5 14:56:54.326839 kernel: Root IRQ handler: gic_handle_irq
Nov 5 14:56:54.326845 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 5 14:56:54.326852 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 5 14:56:54.326859 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 5 14:56:54.326866 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 5 14:56:54.326873 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 5 14:56:54.326880 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 5 14:56:54.326889 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 5 14:56:54.326896 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 5 14:56:54.326902 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 14:56:54.326909 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 14:56:54.326916 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 5 14:56:54.326923 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 5 14:56:54.326930 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 5 14:56:54.326937 kernel: arm-pv: using stolen time PV
Nov 5 14:56:54.326945 kernel: Console: colour dummy device 80x25
Nov 5 14:56:54.326953 kernel: ACPI: Core revision 20240827
Nov 5 14:56:54.326961 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 5 14:56:54.326968 kernel: pid_max: default: 32768 minimum: 301
Nov 5 14:56:54.326975 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 14:56:54.326983 kernel: landlock: Up and running.
Nov 5 14:56:54.326990 kernel: SELinux: Initializing.
Nov 5 14:56:54.327058 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 14:56:54.327066 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 14:56:54.327076 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 14:56:54.327084 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 14:56:54.327091 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 14:56:54.327099 kernel: Remapping and enabling EFI services.
Nov 5 14:56:54.327106 kernel: smp: Bringing up secondary CPUs ...
Nov 5 14:56:54.327113 kernel: Detected PIPT I-cache on CPU1
Nov 5 14:56:54.327120 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 5 14:56:54.327128 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 5 14:56:54.327136 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 14:56:54.327149 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 5 14:56:54.327156 kernel: Detected PIPT I-cache on CPU2
Nov 5 14:56:54.327164 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 5 14:56:54.327180 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 5 14:56:54.327188 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 14:56:54.327195 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 5 14:56:54.327203 kernel: Detected PIPT I-cache on CPU3
Nov 5 14:56:54.327213 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 5 14:56:54.327220 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 5 14:56:54.327228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 14:56:54.327235 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 5 14:56:54.327243 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 14:56:54.327251 kernel: SMP: Total of 4 processors activated.
Nov 5 14:56:54.327259 kernel: CPU: All CPU(s) started at EL1
Nov 5 14:56:54.327266 kernel: CPU features: detected: 32-bit EL0 Support
Nov 5 14:56:54.327274 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 5 14:56:54.327281 kernel: CPU features: detected: Common not Private translations
Nov 5 14:56:54.327289 kernel: CPU features: detected: CRC32 instructions
Nov 5 14:56:54.327296 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 5 14:56:54.327305 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 5 14:56:54.327313 kernel: CPU features: detected: LSE atomic instructions
Nov 5 14:56:54.327320 kernel: CPU features: detected: Privileged Access Never
Nov 5 14:56:54.327328 kernel: CPU features: detected: RAS Extension Support
Nov 5 14:56:54.327335 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 5 14:56:54.327343 kernel: alternatives: applying system-wide alternatives
Nov 5 14:56:54.327350 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 5 14:56:54.327359 kernel: Memory: 2448292K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 101660K reserved, 16384K cma-reserved)
Nov 5 14:56:54.327368 kernel: devtmpfs: initialized
Nov 5 14:56:54.327375 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 14:56:54.327383 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 14:56:54.327390 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 5 14:56:54.327398 kernel: 0 pages in range for non-PLT usage
Nov 5 14:56:54.327405 kernel: 515056 pages in range for PLT usage
Nov 5 14:56:54.327412 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 14:56:54.327422 kernel: SMBIOS 3.0.0 present.
Nov 5 14:56:54.327429 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 5 14:56:54.327437 kernel: DMI: Memory slots populated: 1/1
Nov 5 14:56:54.327444 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 14:56:54.327452 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 5 14:56:54.327460 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 5 14:56:54.327467 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 5 14:56:54.327476 kernel: audit: initializing netlink subsys (disabled)
Nov 5 14:56:54.327484 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Nov 5 14:56:54.327492 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 14:56:54.327499 kernel: cpuidle: using governor menu
Nov 5 14:56:54.327507 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 5 14:56:54.327514 kernel: ASID allocator initialised with 32768 entries
Nov 5 14:56:54.327522 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 14:56:54.327531 kernel: Serial: AMBA PL011 UART driver
Nov 5 14:56:54.327538 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 14:56:54.327546 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 14:56:54.327553 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 5 14:56:54.327561 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 5 14:56:54.327568 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 14:56:54.327589 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 14:56:54.327598 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 5 14:56:54.327608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 5 14:56:54.327615 kernel: ACPI: Added _OSI(Module Device)
Nov 5 14:56:54.327623 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 14:56:54.327631 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 14:56:54.327639 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 14:56:54.327646 kernel: ACPI: Interpreter enabled
Nov 5 14:56:54.327654 kernel: ACPI: Using GIC for interrupt routing
Nov 5 14:56:54.327663 kernel: ACPI: MCFG table detected, 1 entries
Nov 5 14:56:54.327671 kernel: ACPI: CPU0 has been hot-added
Nov 5 14:56:54.327678 kernel: ACPI: CPU1 has been hot-added
Nov 5 14:56:54.327686 kernel: ACPI: CPU2 has been hot-added
Nov 5 14:56:54.327693 kernel: ACPI: CPU3 has been hot-added
Nov 5 14:56:54.327701 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 5 14:56:54.327709 kernel: printk: legacy console [ttyAMA0] enabled
Nov 5 14:56:54.327716 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 14:56:54.327879 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 14:56:54.327965 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 5 14:56:54.328046 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 5 14:56:54.328126 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 5 14:56:54.328218 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 5 14:56:54.328233 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 5 14:56:54.328241 kernel: PCI host bridge to bus 0000:00
Nov 5 14:56:54.328327 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 5 14:56:54.328401 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 5 14:56:54.328472 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 5 14:56:54.328544 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 14:56:54.328650 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 5 14:56:54.328749 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 14:56:54.328836 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 5 14:56:54.328916 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 5 14:56:54.328998 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 5 14:56:54.329081 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 5 14:56:54.329162 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 5 14:56:54.329253 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 5 14:56:54.329328 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 5 14:56:54.329403 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 5 14:56:54.329474 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 5 14:56:54.329486 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 5 14:56:54.329494 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 5 14:56:54.329502 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 5 14:56:54.329509 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 5 14:56:54.329517 kernel: iommu: Default domain type: Translated
Nov 5 14:56:54.329525 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 5 14:56:54.329532 kernel: efivars: Registered efivars operations
Nov 5 14:56:54.329542 kernel: vgaarb: loaded
Nov 5 14:56:54.329549 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 5 14:56:54.329557 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 14:56:54.329565 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 14:56:54.329572 kernel: pnp: PnP ACPI init
Nov 5 14:56:54.329677 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 5 14:56:54.329689 kernel: pnp: PnP ACPI: found 1 devices
Nov 5 14:56:54.329699 kernel: NET: Registered PF_INET protocol family
Nov 5 14:56:54.329707 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 14:56:54.329715 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 14:56:54.329722 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 14:56:54.329730 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 14:56:54.329738 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 14:56:54.329746 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 14:56:54.329755 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 14:56:54.329763 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 14:56:54.329770 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 14:56:54.329778 kernel: PCI: CLS 0 bytes, default 64
Nov 5 14:56:54.329785 kernel: kvm [1]: HYP mode not available
Nov 5 14:56:54.329793 kernel: Initialise system trusted keyrings
Nov 5 14:56:54.329801 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 14:56:54.329809 kernel: Key type asymmetric registered
Nov 5 14:56:54.329817 kernel: Asymmetric key parser 'x509' registered
Nov 5 14:56:54.329824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 5 14:56:54.329831 kernel: io scheduler mq-deadline registered
Nov 5 14:56:54.329839 kernel: io scheduler kyber registered
Nov 5 14:56:54.329846 kernel: io scheduler bfq registered
Nov 5 14:56:54.329854 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 5 14:56:54.329863 kernel: ACPI: button: Power Button [PWRB]
Nov 5 14:56:54.329871 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 5 14:56:54.329952 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 5 14:56:54.329962 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 14:56:54.329970 kernel: thunder_xcv, ver 1.0
Nov 5 14:56:54.329978 kernel: thunder_bgx, ver 1.0
Nov 5 14:56:54.329986 kernel: nicpf, ver 1.0
Nov 5 14:56:54.329995 kernel: nicvf, ver 1.0
Nov 5 14:56:54.330088 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 5 14:56:54.330165 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T14:56:53 UTC (1762354613)
Nov 5 14:56:54.330183 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 5 14:56:54.330191 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 5 14:56:54.330198 kernel: watchdog: NMI not fully supported
Nov 5 14:56:54.330206 kernel: watchdog: Hard watchdog permanently disabled
Nov 5 14:56:54.330216 kernel: NET: Registered PF_INET6 protocol family
Nov 5 14:56:54.330224 kernel: Segment Routing with IPv6
Nov 5 14:56:54.330231 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 14:56:54.330239 kernel: NET: Registered PF_PACKET protocol family
Nov 5 14:56:54.330246 kernel: Key type dns_resolver registered
Nov 5 14:56:54.330254 kernel: registered taskstats version 1
Nov 5 14:56:54.330261 kernel: Loading compiled-in X.509 certificates
Nov 5 14:56:54.330270 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4b3babb46eb583bd8b0310732885d24e60ea58c5'
Nov 5 14:56:54.330277 kernel: Demotion targets for Node 0: null
Nov 5 14:56:54.330285 kernel: Key type .fscrypt registered
Nov 5 14:56:54.330293 kernel: Key type fscrypt-provisioning registered
Nov 5 14:56:54.330300 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 14:56:54.330308 kernel: ima: Allocated hash algorithm: sha1
Nov 5 14:56:54.330315 kernel: ima: No architecture policies found
Nov 5 14:56:54.330324 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 5 14:56:54.330332 kernel: clk: Disabling unused clocks
Nov 5 14:56:54.330339 kernel: PM: genpd: Disabling unused power domains
Nov 5 14:56:54.330347 kernel: Freeing unused kernel memory: 12992K
Nov 5 14:56:54.330354 kernel: Run /init as init process
Nov 5 14:56:54.330362 kernel: with arguments:
Nov 5 14:56:54.330369 kernel: /init
Nov 5 14:56:54.330378 kernel: with environment:
Nov 5 14:56:54.330385 kernel: HOME=/
Nov 5 14:56:54.330392 kernel: TERM=linux
Nov 5 14:56:54.330496 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 5 14:56:54.330584 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 5 14:56:54.330595 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 14:56:54.330610 kernel: GPT:16515071 != 27000831
Nov 5 14:56:54.330620 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 14:56:54.330630 kernel: GPT:16515071 != 27000831
Nov 5 14:56:54.330639 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 14:56:54.330648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 14:56:54.330656 kernel: SCSI subsystem initialized
Nov 5 14:56:54.330664 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 14:56:54.330673 kernel: device-mapper: uevent: version 1.0.3
Nov 5 14:56:54.330681 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 14:56:54.330689 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 5 14:56:54.330697 kernel: raid6: neonx8 gen() 15792 MB/s
Nov 5 14:56:54.330704 kernel: raid6: neonx4 gen() 15808 MB/s
Nov 5 14:56:54.330712 kernel: raid6: neonx2 gen() 13246 MB/s
Nov 5 14:56:54.330719 kernel: raid6: neonx1 gen() 10448 MB/s
Nov 5 14:56:54.330727 kernel: raid6: int64x8 gen() 6903 MB/s
Nov 5 14:56:54.330736 kernel: raid6: int64x4 gen() 7349 MB/s
Nov 5 14:56:54.330743 kernel: raid6: int64x2 gen() 6101 MB/s
Nov 5 14:56:54.330751 kernel: raid6: int64x1 gen() 5041 MB/s
Nov 5 14:56:54.330758 kernel: raid6: using algorithm neonx4 gen() 15808 MB/s
Nov 5 14:56:54.330766 kernel: raid6: .... xor() 12366 MB/s, rmw enabled
Nov 5 14:56:54.330774 kernel: raid6: using neon recovery algorithm
Nov 5 14:56:54.330781 kernel: xor: measuring software checksum speed
Nov 5 14:56:54.330790 kernel: 8regs : 20098 MB/sec
Nov 5 14:56:54.330798 kernel: 32regs : 21630 MB/sec
Nov 5 14:56:54.330806 kernel: arm64_neon : 27984 MB/sec
Nov 5 14:56:54.330813 kernel: xor: using function: arm64_neon (27984 MB/sec)
Nov 5 14:56:54.330821 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 14:56:54.330829 kernel: BTRFS: device fsid d8f84a83-fd8b-4c0e-831a-0d7c5ff234be devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (206)
Nov 5 14:56:54.330837 kernel: BTRFS info (device dm-0): first mount of filesystem d8f84a83-fd8b-4c0e-831a-0d7c5ff234be
Nov 5 14:56:54.330847 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 5 14:56:54.330855 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 14:56:54.330863 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 14:56:54.330870 kernel: loop: module loaded
Nov 5 14:56:54.330878 kernel: loop0: detected capacity change from 0 to 91464
Nov 5 14:56:54.330886 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 14:56:54.330895 systemd[1]: Successfully made /usr/ read-only.
Nov 5 14:56:54.330911 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 14:56:54.330920 systemd[1]: Detected virtualization kvm.
Nov 5 14:56:54.330928 systemd[1]: Detected architecture arm64.
Nov 5 14:56:54.330936 systemd[1]: Running in initrd.
Nov 5 14:56:54.330945 systemd[1]: No hostname configured, using default hostname.
Nov 5 14:56:54.330955 systemd[1]: Hostname set to .
Nov 5 14:56:54.330963 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 14:56:54.330971 systemd[1]: Queued start job for default target initrd.target.
Nov 5 14:56:54.330980 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 14:56:54.330989 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 14:56:54.330997 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 14:56:54.331007 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 14:56:54.331017 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 14:56:54.331026 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 14:56:54.331034 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 14:56:54.331043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 14:56:54.331051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 14:56:54.331059 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 14:56:54.331069 systemd[1]: Reached target paths.target - Path Units.
Nov 5 14:56:54.331077 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 14:56:54.331086 systemd[1]: Reached target swap.target - Swaps.
Nov 5 14:56:54.331094 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 14:56:54.331103 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 14:56:54.331111 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 14:56:54.331119 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 14:56:54.331129 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 14:56:54.331137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 14:56:54.331145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 14:56:54.331153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 14:56:54.331176 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 14:56:54.331187 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 14:56:54.331196 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 14:56:54.331204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 14:56:54.331213 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 14:56:54.331222 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 14:56:54.331231 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 14:56:54.331241 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 14:56:54.331250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 14:56:54.331259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 14:56:54.331268 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 14:56:54.331278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 14:56:54.331287 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 14:56:54.331296 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 14:56:54.331325 systemd-journald[347]: Collecting audit messages is disabled.
Nov 5 14:56:54.331347 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 14:56:54.331355 kernel: Bridge firewalling registered
Nov 5 14:56:54.331364 systemd-journald[347]: Journal started
Nov 5 14:56:54.331382 systemd-journald[347]: Runtime Journal (/run/log/journal/033a77110d5146cdbaf60ec955e6f0a6) is 6M, max 48.5M, 42.4M free.
Nov 5 14:56:54.331350 systemd-modules-load[348]: Inserted module 'br_netfilter'
Nov 5 14:56:54.333526 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 14:56:54.335386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 14:56:54.337994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 14:56:54.343129 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 14:56:54.344933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 14:56:54.348416 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 14:56:54.357130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 14:56:54.360357 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 14:56:54.366820 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 14:56:54.369548 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 14:56:54.371161 systemd-tmpfiles[367]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 14:56:54.375504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 14:56:54.385137 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 14:56:54.386880 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 14:56:54.390715 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 14:56:54.420815 systemd-resolved[376]: Positive Trust Anchors:
Nov 5 14:56:54.420828 systemd-resolved[376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 14:56:54.420831 systemd-resolved[376]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 14:56:54.420861 systemd-resolved[376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 14:56:54.434429 dracut-cmdline[390]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b
Nov 5 14:56:54.443151 systemd-resolved[376]: Defaulting to hostname 'linux'.
Nov 5 14:56:54.444119 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 14:56:54.445921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 14:56:54.502613 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 14:56:54.510604 kernel: iscsi: registered transport (tcp)
Nov 5 14:56:54.524615 kernel: iscsi: registered transport (qla4xxx)
Nov 5 14:56:54.524643 kernel: QLogic iSCSI HBA Driver
Nov 5 14:56:54.547153 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 14:56:54.572198 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 14:56:54.574442 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 14:56:54.616932 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 14:56:54.619304 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 14:56:54.621018 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 14:56:54.651306 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 14:56:54.654738 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 14:56:54.683427 systemd-udevd[627]: Using default interface naming scheme 'v257'.
Nov 5 14:56:54.691021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 14:56:54.693559 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 14:56:54.714222 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 14:56:54.717296 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 14:56:54.719349 dracut-pre-trigger[704]: rd.md=0: removing MD RAID activation
Nov 5 14:56:54.739671 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 14:56:54.741778 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 14:56:54.759122 systemd-networkd[741]: lo: Link UP
Nov 5 14:56:54.759129 systemd-networkd[741]: lo: Gained carrier
Nov 5 14:56:54.759748 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 14:56:54.761093 systemd[1]: Reached target network.target - Network.
Nov 5 14:56:54.799866 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 14:56:54.804699 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 14:56:54.852312 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 14:56:54.868162 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 14:56:54.875938 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 14:56:54.883361 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 14:56:54.893392 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 14:56:54.894724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 14:56:54.894794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 14:56:54.895523 systemd-networkd[741]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 14:56:54.895527 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 14:56:54.897010 systemd-networkd[741]: eth0: Link UP
Nov 5 14:56:54.897188 systemd-networkd[741]: eth0: Gained carrier
Nov 5 14:56:54.897199 systemd-networkd[741]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 14:56:54.898637 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 14:56:54.913215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 14:56:54.914671 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 14:56:54.917640 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 14:56:54.920540 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 14:56:54.922009 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 14:56:54.927833 disk-uuid[807]: Primary Header is updated.
Nov 5 14:56:54.927833 disk-uuid[807]: Secondary Entries is updated.
Nov 5 14:56:54.927833 disk-uuid[807]: Secondary Header is updated.
Nov 5 14:56:54.925732 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 14:56:54.930333 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 14:56:54.933171 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 14:56:54.959709 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 14:56:55.946696 disk-uuid[812]: Warning: The kernel is still using the old partition table.
Nov 5 14:56:55.946696 disk-uuid[812]: The new table will be used at the next reboot or after you
Nov 5 14:56:55.946696 disk-uuid[812]: run partprobe(8) or kpartx(8)
Nov 5 14:56:55.946696 disk-uuid[812]: The operation has completed successfully.
Nov 5 14:56:55.952874 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 14:56:55.953656 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 14:56:55.956152 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 14:56:55.983604 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835)
Nov 5 14:56:55.986301 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:56:55.986320 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 14:56:55.989334 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 14:56:55.989355 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 14:56:55.995594 kernel: BTRFS info (device vda6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:56:55.996249 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 14:56:55.998411 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 14:56:56.106499 ignition[854]: Ignition 2.22.0
Nov 5 14:56:56.106516 ignition[854]: Stage: fetch-offline
Nov 5 14:56:56.106573 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Nov 5 14:56:56.106606 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:56:56.106695 ignition[854]: parsed url from cmdline: ""
Nov 5 14:56:56.106698 ignition[854]: no config URL provided
Nov 5 14:56:56.106702 ignition[854]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 14:56:56.106711 ignition[854]: no config at "/usr/lib/ignition/user.ign"
Nov 5 14:56:56.106751 ignition[854]: op(1): [started] loading QEMU firmware config module
Nov 5 14:56:56.106757 ignition[854]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 14:56:56.112207 ignition[854]: op(1): [finished] loading QEMU firmware config module
Nov 5 14:56:56.154001 systemd-networkd[741]: eth0: Gained IPv6LL
Nov 5 14:56:56.162041 ignition[854]: parsing config with SHA512: e31d0901d78fd68f7f7370fb6ccba5fca3898baa3bccab5b179d67f497b46fb4a87f7bf8a9a0e60570622fd27682e0cf18227ed98fcf4e07ff727e6a68a486dc
Nov 5 14:56:56.167855 unknown[854]: fetched base config from "system"
Nov 5 14:56:56.168775 unknown[854]: fetched user config from "qemu"
Nov 5 14:56:56.169178 ignition[854]: fetch-offline: fetch-offline passed
Nov 5 14:56:56.169253 ignition[854]: Ignition finished successfully
Nov 5 14:56:56.172588 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 14:56:56.173941 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 14:56:56.174779 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 14:56:56.209954 ignition[871]: Ignition 2.22.0
Nov 5 14:56:56.209973 ignition[871]: Stage: kargs
Nov 5 14:56:56.210115 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Nov 5 14:56:56.210123 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:56:56.210908 ignition[871]: kargs: kargs passed
Nov 5 14:56:56.210959 ignition[871]: Ignition finished successfully
Nov 5 14:56:56.216270 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 14:56:56.218476 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 14:56:56.248554 ignition[878]: Ignition 2.22.0
Nov 5 14:56:56.248568 ignition[878]: Stage: disks
Nov 5 14:56:56.248719 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Nov 5 14:56:56.251826 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 14:56:56.248728 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:56:56.253025 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 14:56:56.249473 ignition[878]: disks: disks passed
Nov 5 14:56:56.254735 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 14:56:56.249520 ignition[878]: Ignition finished successfully
Nov 5 14:56:56.256801 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 14:56:56.258522 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 14:56:56.260058 systemd[1]: Reached target basic.target - Basic System.
Nov 5 14:56:56.262892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 14:56:56.294877 systemd-fsck[888]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 5 14:56:56.300075 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 14:56:56.304133 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 14:56:56.372603 kernel: EXT4-fs (vda9): mounted filesystem 67ab558f-e1dc-496b-b18a-e9709809a3c4 r/w with ordered data mode. Quota mode: none.
Nov 5 14:56:56.372897 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 14:56:56.374193 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 14:56:56.376617 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 14:56:56.378245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 14:56:56.379341 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 14:56:56.379375 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 14:56:56.379401 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 14:56:56.393358 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 14:56:56.396838 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 14:56:56.401887 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (896)
Nov 5 14:56:56.401909 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:56:56.401919 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 14:56:56.404594 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 14:56:56.404622 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 14:56:56.405762 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 14:56:56.435833 initrd-setup-root[920]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 14:56:56.440043 initrd-setup-root[927]: cut: /sysroot/etc/group: No such file or directory
Nov 5 14:56:56.443461 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 14:56:56.446231 initrd-setup-root[941]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 14:56:56.512982 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 14:56:56.516365 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 14:56:56.517996 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 14:56:56.532289 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 14:56:56.533938 kernel: BTRFS info (device vda6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:56:56.552856 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 14:56:56.567316 ignition[1010]: INFO : Ignition 2.22.0
Nov 5 14:56:56.567316 ignition[1010]: INFO : Stage: mount
Nov 5 14:56:56.568994 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 14:56:56.568994 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:56:56.568994 ignition[1010]: INFO : mount: mount passed
Nov 5 14:56:56.568994 ignition[1010]: INFO : Ignition finished successfully
Nov 5 14:56:56.570115 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 14:56:56.573895 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 14:56:57.374443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 14:56:57.393569 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022)
Nov 5 14:56:57.393628 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 14:56:57.393640 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 14:56:57.397241 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 14:56:57.397277 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 14:56:57.398736 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 14:56:57.425988 ignition[1039]: INFO : Ignition 2.22.0
Nov 5 14:56:57.425988 ignition[1039]: INFO : Stage: files
Nov 5 14:56:57.427634 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 14:56:57.427634 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:56:57.427634 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 14:56:57.431045 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 14:56:57.431045 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 14:56:57.431045 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 14:56:57.431045 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 14:56:57.431045 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 14:56:57.430691 unknown[1039]: wrote ssh authorized keys file for user: core
Nov 5 14:56:57.439407 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 14:56:57.439407 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 5 14:56:57.486366 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 14:56:57.658662 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 14:56:57.658662 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 14:56:57.663074 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 5 14:56:58.176490 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 14:56:58.860755 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 14:56:58.860755 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 14:56:58.864711 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 14:56:58.864711 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 14:56:58.864711 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 14:56:58.864711 ignition[1039]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 14:56:58.864711 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 14:56:58.864711 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 14:56:58.864711 ignition[1039]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 14:56:58.864711 ignition[1039]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 14:56:58.884636 ignition[1039]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 14:56:58.888070 ignition[1039]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 14:56:58.891648 ignition[1039]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 14:56:58.891648 ignition[1039]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 14:56:58.891648 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 14:56:58.891648 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 14:56:58.891648 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 14:56:58.891648 ignition[1039]: INFO : files: files passed
Nov 5 14:56:58.891648 ignition[1039]: INFO : Ignition finished successfully
Nov 5 14:56:58.891030 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 14:56:58.893375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 14:56:58.895326 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 14:56:58.907515 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 14:56:58.907621 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 14:56:58.910684 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 14:56:58.912145 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 14:56:58.912145 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 14:56:58.915131 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 14:56:58.914097 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 14:56:58.916755 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 14:56:58.919477 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 14:56:58.981448 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 14:56:58.981563 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 14:56:58.984222 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 14:56:58.985973 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 14:56:58.988141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 14:56:58.994466 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 14:56:59.018794 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 14:56:59.021256 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 14:56:59.043862 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 14:56:59.043991 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 14:56:59.046098 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 14:56:59.049081 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 14:56:59.050853 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 14:56:59.050978 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 14:56:59.053658 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 14:56:59.055665 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 14:56:59.057376 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 14:56:59.059089 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 14:56:59.061021 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 14:56:59.062938 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 14:56:59.064858 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 14:56:59.066698 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 14:56:59.068812 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 14:56:59.070794 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 14:56:59.072708 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 14:56:59.074352 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 14:56:59.074481 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 14:56:59.083007 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 14:56:59.088415 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 14:56:59.090359 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 14:56:59.091345 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 14:56:59.092663 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 14:56:59.092790 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 14:56:59.100003 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 14:56:59.100139 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 14:56:59.102034 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 14:56:59.103552 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 14:56:59.107607 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 14:56:59.108809 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 14:56:59.110839 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 14:56:59.112348 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 14:56:59.112432 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 14:56:59.113933 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 14:56:59.114009 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 14:56:59.115530 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 14:56:59.115645 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 14:56:59.117418 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 14:56:59.117519 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 14:56:59.120728 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 14:56:59.122677 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 14:56:59.122801 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 14:56:59.130151 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 14:56:59.131046 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 14:56:59.131167 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 14:56:59.133244 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 14:56:59.133363 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 14:56:59.135590 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 14:56:59.135695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 14:56:59.141910 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 14:56:59.143625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 14:56:59.147608 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 14:56:59.149637 ignition[1097]: INFO : Ignition 2.22.0
Nov 5 14:56:59.149637 ignition[1097]: INFO : Stage: umount
Nov 5 14:56:59.151351 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 14:56:59.151351 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 14:56:59.153674 ignition[1097]: INFO : umount: umount passed
Nov 5 14:56:59.153674 ignition[1097]: INFO : Ignition finished successfully
Nov 5 14:56:59.153638 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 14:56:59.153725 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 14:56:59.156070 systemd[1]: Stopped target network.target - Network.
Nov 5 14:56:59.157025 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 14:56:59.157082 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 14:56:59.158886 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 14:56:59.158935 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 14:56:59.163974 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 14:56:59.164022 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 14:56:59.165079 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 14:56:59.165121 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 14:56:59.168086 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 14:56:59.169165 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 14:56:59.175741 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 14:56:59.175850 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 14:56:59.183042 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 14:56:59.183157 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 14:56:59.186793 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 14:56:59.186866 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 14:56:59.189134 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 14:56:59.190308 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 14:56:59.190345 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 14:56:59.192440 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 14:56:59.192490 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 14:56:59.195123 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 14:56:59.196281 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 14:56:59.196354 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 14:56:59.198421 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 14:56:59.198460 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 14:56:59.200207 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 14:56:59.200248 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 14:56:59.202131 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 14:56:59.215385 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 14:56:59.215521 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 14:56:59.218427 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 14:56:59.218495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 14:56:59.220836 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 14:56:59.220869 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 14:56:59.222632 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 14:56:59.222681 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 14:56:59.225708 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 14:56:59.225762 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 14:56:59.228535 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 14:56:59.228605 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 14:56:59.245947 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 14:56:59.247005 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 14:56:59.247078 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 14:56:59.249395 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 14:56:59.249440 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 14:56:59.251591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 14:56:59.251637 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 14:56:59.254748 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 14:56:59.260719 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 14:56:59.270691 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 14:56:59.270821 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 14:56:59.287311 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 14:56:59.289955 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 14:56:59.311907 systemd[1]: Switching root. 
Nov 5 14:56:59.342872 systemd-journald[347]: Journal stopped Nov 5 14:57:00.116990 systemd-journald[347]: Received SIGTERM from PID 1 (systemd). Nov 5 14:57:00.117042 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 14:57:00.117057 kernel: SELinux: policy capability open_perms=1 Nov 5 14:57:00.117067 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 14:57:00.117077 kernel: SELinux: policy capability always_check_network=0 Nov 5 14:57:00.117086 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 14:57:00.117099 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 14:57:00.117116 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 14:57:00.117132 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 14:57:00.117145 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 14:57:00.117155 kernel: audit: type=1403 audit(1762354619.510:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 14:57:00.117169 systemd[1]: Successfully loaded SELinux policy in 65.970ms. Nov 5 14:57:00.117184 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.567ms. Nov 5 14:57:00.117197 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 14:57:00.117208 systemd[1]: Detected virtualization kvm. Nov 5 14:57:00.117220 systemd[1]: Detected architecture arm64. Nov 5 14:57:00.117230 systemd[1]: Detected first boot. Nov 5 14:57:00.117241 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 14:57:00.117252 zram_generator::config[1143]: No configuration found. 
Nov 5 14:57:00.117264 kernel: NET: Registered PF_VSOCK protocol family Nov 5 14:57:00.117283 systemd[1]: Populated /etc with preset unit settings. Nov 5 14:57:00.117294 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 14:57:00.117306 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 14:57:00.117316 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 14:57:00.117327 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 14:57:00.117338 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 14:57:00.117348 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 14:57:00.117359 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 14:57:00.117370 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 14:57:00.117382 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 14:57:00.117393 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 14:57:00.117403 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 14:57:00.117414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 14:57:00.117425 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 14:57:00.117436 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 14:57:00.117447 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 14:57:00.117459 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 14:57:00.117470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Nov 5 14:57:00.117481 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 5 14:57:00.117491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 14:57:00.117502 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 14:57:00.117513 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 14:57:00.117524 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 14:57:00.117535 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 14:57:00.117546 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 14:57:00.117556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 14:57:00.117567 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 14:57:00.117944 systemd[1]: Reached target slices.target - Slice Units. Nov 5 14:57:00.117966 systemd[1]: Reached target swap.target - Swaps. Nov 5 14:57:00.117982 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 14:57:00.117993 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 14:57:00.118005 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 14:57:00.118015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 14:57:00.118027 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 14:57:00.118037 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 14:57:00.118049 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 14:57:00.118075 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 14:57:00.118087 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 14:57:00.118098 systemd[1]: Mounting media.mount - External Media Directory... 
Nov 5 14:57:00.118109 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 14:57:00.118120 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 14:57:00.118131 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 14:57:00.118143 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 14:57:00.118156 systemd[1]: Reached target machines.target - Containers. Nov 5 14:57:00.118168 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 14:57:00.118179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 14:57:00.118190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 14:57:00.118201 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 14:57:00.118212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 14:57:00.118225 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 14:57:00.118238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 14:57:00.118249 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 14:57:00.118260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 14:57:00.118281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 14:57:00.118295 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 14:57:00.118321 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Nov 5 14:57:00.118333 kernel: fuse: init (API version 7.41) Nov 5 14:57:00.118345 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 14:57:00.118356 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 14:57:00.118367 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 14:57:00.118379 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 14:57:00.118389 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 14:57:00.118400 kernel: ACPI: bus type drm_connector registered Nov 5 14:57:00.118410 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 14:57:00.118422 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 14:57:00.118433 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 14:57:00.118469 systemd-journald[1211]: Collecting audit messages is disabled. Nov 5 14:57:00.118493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 14:57:00.118504 systemd-journald[1211]: Journal started Nov 5 14:57:00.118528 systemd-journald[1211]: Runtime Journal (/run/log/journal/033a77110d5146cdbaf60ec955e6f0a6) is 6M, max 48.5M, 42.4M free. Nov 5 14:56:59.877093 systemd[1]: Queued start job for default target multi-user.target. Nov 5 14:56:59.900596 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 14:56:59.901021 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 14:57:00.124135 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 14:57:00.125184 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Nov 5 14:57:00.126471 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 14:57:00.127897 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 14:57:00.129035 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 14:57:00.130413 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 14:57:00.131778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 14:57:00.134652 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 14:57:00.136194 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 14:57:00.137825 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 14:57:00.137993 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 14:57:00.140889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 14:57:00.141075 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 14:57:00.142635 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 14:57:00.142789 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 14:57:00.144164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 14:57:00.144338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 14:57:00.145904 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 14:57:00.146063 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 14:57:00.147568 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 14:57:00.147749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 14:57:00.149229 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 14:57:00.152659 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Nov 5 14:57:00.155122 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 14:57:00.157074 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 14:57:00.169219 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 14:57:00.170762 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 14:57:00.173065 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 14:57:00.175114 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 14:57:00.176337 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 14:57:00.176377 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 14:57:00.178257 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 14:57:00.179739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 14:57:00.187420 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 14:57:00.189614 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 14:57:00.190778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 14:57:00.191692 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 14:57:00.192812 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 14:57:00.196722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 5 14:57:00.197421 systemd-journald[1211]: Time spent on flushing to /var/log/journal/033a77110d5146cdbaf60ec955e6f0a6 is 15.836ms for 865 entries. Nov 5 14:57:00.197421 systemd-journald[1211]: System Journal (/var/log/journal/033a77110d5146cdbaf60ec955e6f0a6) is 8M, max 163.5M, 155.5M free. Nov 5 14:57:00.224259 systemd-journald[1211]: Received client request to flush runtime journal. Nov 5 14:57:00.224317 kernel: loop1: detected capacity change from 0 to 119344 Nov 5 14:57:00.200135 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 14:57:00.203721 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 14:57:00.206319 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 14:57:00.207928 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 14:57:00.209346 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 14:57:00.212675 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 14:57:00.215416 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 14:57:00.218689 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 14:57:00.221158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 14:57:00.227490 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 14:57:00.240169 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 14:57:00.243029 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 14:57:00.245959 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 14:57:00.248593 kernel: loop2: detected capacity change from 0 to 211168 Nov 5 14:57:00.258875 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Nov 5 14:57:00.261633 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 14:57:00.270795 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Nov 5 14:57:00.270809 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Nov 5 14:57:00.275693 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 14:57:00.288597 kernel: loop3: detected capacity change from 0 to 100624 Nov 5 14:57:00.307103 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 14:57:00.313593 kernel: loop4: detected capacity change from 0 to 119344 Nov 5 14:57:00.319602 kernel: loop5: detected capacity change from 0 to 211168 Nov 5 14:57:00.326660 kernel: loop6: detected capacity change from 0 to 100624 Nov 5 14:57:00.330958 (sd-merge)[1287]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 5 14:57:00.333750 (sd-merge)[1287]: Merged extensions into '/usr'. Nov 5 14:57:00.337198 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 14:57:00.337213 systemd[1]: Reloading... Nov 5 14:57:00.362600 systemd-resolved[1275]: Positive Trust Anchors: Nov 5 14:57:00.362615 systemd-resolved[1275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 14:57:00.362618 systemd-resolved[1275]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 14:57:00.362648 systemd-resolved[1275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 14:57:00.370105 systemd-resolved[1275]: Defaulting to hostname 'linux'. Nov 5 14:57:00.387643 zram_generator::config[1317]: No configuration found. Nov 5 14:57:00.519369 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 14:57:00.519487 systemd[1]: Reloading finished in 181 ms. Nov 5 14:57:00.536565 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 14:57:00.538021 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 14:57:00.541298 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 14:57:00.562807 systemd[1]: Starting ensure-sysext.service... Nov 5 14:57:00.564724 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 14:57:00.576016 systemd[1]: Reload requested from client PID 1350 ('systemctl') (unit ensure-sysext.service)... Nov 5 14:57:00.576031 systemd[1]: Reloading... Nov 5 14:57:00.579420 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 14:57:00.579450 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Nov 5 14:57:00.579777 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 14:57:00.579972 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 14:57:00.580588 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 14:57:00.580781 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Nov 5 14:57:00.580827 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Nov 5 14:57:00.584114 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 14:57:00.584132 systemd-tmpfiles[1351]: Skipping /boot Nov 5 14:57:00.589968 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 14:57:00.589980 systemd-tmpfiles[1351]: Skipping /boot Nov 5 14:57:00.629613 zram_generator::config[1381]: No configuration found. Nov 5 14:57:00.755472 systemd[1]: Reloading finished in 179 ms. Nov 5 14:57:00.774160 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 14:57:00.792135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 14:57:00.799949 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 14:57:00.801990 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 14:57:00.830854 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 14:57:00.835825 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 14:57:00.838728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 14:57:00.841808 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 5 14:57:00.846977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 14:57:00.848136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 14:57:00.855948 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 14:57:00.859011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 14:57:00.861858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 14:57:00.861991 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 14:57:00.863116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 14:57:00.863275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 14:57:00.865207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 14:57:00.865368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 14:57:00.868263 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 14:57:00.868428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 14:57:00.872985 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 14:57:00.881552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 14:57:00.882834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 14:57:00.885654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 14:57:00.888919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 5 14:57:00.890540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 14:57:00.890704 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 14:57:00.890796 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 14:57:00.894198 augenrules[1452]: No rules Nov 5 14:57:00.895886 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 14:57:00.897476 systemd-udevd[1425]: Using default interface naming scheme 'v257'. Nov 5 14:57:00.898560 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 14:57:00.899295 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 14:57:00.901198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 14:57:00.901341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 14:57:00.904372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 14:57:00.904545 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 14:57:00.906715 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 14:57:00.906852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 14:57:00.908779 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 14:57:00.915087 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 14:57:00.922258 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Nov 5 14:57:00.923442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 14:57:00.924856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 14:57:00.927842 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 14:57:00.936544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 14:57:00.941412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 14:57:00.943843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 14:57:00.943969 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 14:57:00.947298 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 14:57:00.948513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 14:57:00.952987 systemd[1]: Finished ensure-sysext.service. Nov 5 14:57:00.967736 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 14:57:00.993205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 14:57:00.993749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 14:57:00.996463 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 14:57:00.996974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 14:57:00.999889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 5 14:57:01.000214 augenrules[1477]: /sbin/augenrules: No change Nov 5 14:57:01.000469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 14:57:01.003319 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 14:57:01.003470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 14:57:01.008568 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 14:57:01.009265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 14:57:01.011557 augenrules[1517]: No rules Nov 5 14:57:01.028266 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 14:57:01.030622 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 14:57:01.048322 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 14:57:01.051079 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 14:57:01.066223 systemd-networkd[1487]: lo: Link UP Nov 5 14:57:01.066234 systemd-networkd[1487]: lo: Gained carrier Nov 5 14:57:01.067168 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 14:57:01.069903 systemd[1]: Reached target network.target - Network. Nov 5 14:57:01.071518 systemd-networkd[1487]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 14:57:01.071526 systemd-networkd[1487]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 14:57:01.073337 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 14:57:01.077768 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Nov 5 14:57:01.080790 systemd-networkd[1487]: eth0: Link UP Nov 5 14:57:01.080926 systemd-networkd[1487]: eth0: Gained carrier Nov 5 14:57:01.080946 systemd-networkd[1487]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 14:57:01.084687 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 5 14:57:01.095668 systemd-networkd[1487]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 14:57:01.096432 systemd-timesyncd[1494]: Network configuration changed, trying to establish connection. Nov 5 14:57:01.097779 systemd-timesyncd[1494]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 5 14:57:01.097839 systemd-timesyncd[1494]: Initial clock synchronization to Wed 2025-11-05 14:57:01.414937 UTC. Nov 5 14:57:01.104709 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 14:57:01.114432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 14:57:01.117255 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 14:57:01.119258 ldconfig[1419]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 14:57:01.125955 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 14:57:01.128495 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 14:57:01.142822 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 14:57:01.150747 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 14:57:01.153873 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 14:57:01.155048 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 5 14:57:01.156802 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 14:57:01.158804 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 14:57:01.159924 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 14:57:01.161561 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 14:57:01.162852 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 14:57:01.162888 systemd[1]: Reached target paths.target - Path Units.
Nov 5 14:57:01.163880 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 14:57:01.165450 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 14:57:01.167775 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 14:57:01.171675 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 14:57:01.173159 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 14:57:01.176279 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 14:57:01.187276 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 14:57:01.188701 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 14:57:01.190414 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 14:57:01.197469 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 14:57:01.198507 systemd[1]: Reached target basic.target - Basic System.
Nov 5 14:57:01.199519 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 14:57:01.199553 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 14:57:01.200542 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 14:57:01.202422 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 14:57:01.204338 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 14:57:01.211327 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 14:57:01.213268 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 14:57:01.214304 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 14:57:01.215252 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 14:57:01.217403 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 14:57:01.218733 jq[1557]: false
Nov 5 14:57:01.221249 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 14:57:01.223934 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 14:57:01.227264 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 14:57:01.231682 extend-filesystems[1558]: Found /dev/vda6
Nov 5 14:57:01.231763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 14:57:01.232909 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 14:57:01.233304 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 14:57:01.234040 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 14:57:01.235969 extend-filesystems[1558]: Found /dev/vda9
Nov 5 14:57:01.238273 extend-filesystems[1558]: Checking size of /dev/vda9
Nov 5 14:57:01.241858 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 14:57:01.246618 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 14:57:01.249433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 14:57:01.249659 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 14:57:01.249908 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 14:57:01.250065 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 14:57:01.252313 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 14:57:01.253227 jq[1579]: true
Nov 5 14:57:01.252508 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 14:57:01.262152 update_engine[1576]: I20251105 14:57:01.261941 1576 main.cc:92] Flatcar Update Engine starting
Nov 5 14:57:01.266867 extend-filesystems[1558]: Resized partition /dev/vda9
Nov 5 14:57:01.268245 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 14:57:01.272010 extend-filesystems[1604]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 14:57:01.275470 jq[1591]: true
Nov 5 14:57:01.281586 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 5 14:57:01.284382 tar[1586]: linux-arm64/LICENSE
Nov 5 14:57:01.284382 tar[1586]: linux-arm64/helm
Nov 5 14:57:01.291067 dbus-daemon[1555]: [system] SELinux support is enabled
Nov 5 14:57:01.291274 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 14:57:01.295818 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 14:57:01.295852 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 14:57:01.297548 update_engine[1576]: I20251105 14:57:01.296728 1576 update_check_scheduler.cc:74] Next update check in 6m37s
Nov 5 14:57:01.298252 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 14:57:01.298278 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 14:57:01.303436 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 14:57:01.307621 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 14:57:01.337597 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 5 14:57:01.353137 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 5 14:57:01.353137 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 5 14:57:01.353137 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 5 14:57:01.365654 extend-filesystems[1558]: Resized filesystem in /dev/vda9
Nov 5 14:57:01.355036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 14:57:01.370322 bash[1624]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 14:57:01.359971 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 14:57:01.360182 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 14:57:01.360425 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 5 14:57:01.361491 systemd-logind[1568]: New seat seat0.
Nov 5 14:57:01.363382 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 14:57:01.365216 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 14:57:01.368200 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 5 14:57:01.402346 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 14:57:01.437291 containerd[1592]: time="2025-11-05T14:57:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 14:57:01.438594 containerd[1592]: time="2025-11-05T14:57:01.438070720Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 14:57:01.450714 containerd[1592]: time="2025-11-05T14:57:01.450675680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.24µs"
Nov 5 14:57:01.450801 containerd[1592]: time="2025-11-05T14:57:01.450786560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 14:57:01.450871 containerd[1592]: time="2025-11-05T14:57:01.450858240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 14:57:01.451051 containerd[1592]: time="2025-11-05T14:57:01.451033480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 14:57:01.451108 containerd[1592]: time="2025-11-05T14:57:01.451096120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 14:57:01.451181 containerd[1592]: time="2025-11-05T14:57:01.451168280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 14:57:01.451286 containerd[1592]: time="2025-11-05T14:57:01.451267840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 14:57:01.451354 containerd[1592]: time="2025-11-05T14:57:01.451339720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 14:57:01.451630 containerd[1592]: time="2025-11-05T14:57:01.451607120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 14:57:01.451692 containerd[1592]: time="2025-11-05T14:57:01.451679160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 14:57:01.451741 containerd[1592]: time="2025-11-05T14:57:01.451728640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 14:57:01.451805 containerd[1592]: time="2025-11-05T14:57:01.451792440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 14:57:01.451927 containerd[1592]: time="2025-11-05T14:57:01.451910360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 14:57:01.452172 containerd[1592]: time="2025-11-05T14:57:01.452149000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 14:57:01.452256 containerd[1592]: time="2025-11-05T14:57:01.452240320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 14:57:01.452332 containerd[1592]: time="2025-11-05T14:57:01.452316400Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 14:57:01.452423 containerd[1592]: time="2025-11-05T14:57:01.452407320Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 14:57:01.452734 containerd[1592]: time="2025-11-05T14:57:01.452715840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 14:57:01.452871 containerd[1592]: time="2025-11-05T14:57:01.452850280Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 14:57:01.456463 containerd[1592]: time="2025-11-05T14:57:01.456436840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 14:57:01.456624 containerd[1592]: time="2025-11-05T14:57:01.456570360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 14:57:01.456703 containerd[1592]: time="2025-11-05T14:57:01.456671040Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 14:57:01.456726 containerd[1592]: time="2025-11-05T14:57:01.456704400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 14:57:01.456726 containerd[1592]: time="2025-11-05T14:57:01.456718880Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 14:57:01.456776 containerd[1592]: time="2025-11-05T14:57:01.456739440Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 14:57:01.456776 containerd[1592]: time="2025-11-05T14:57:01.456752400Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 14:57:01.456776 containerd[1592]: time="2025-11-05T14:57:01.456763640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 14:57:01.456776 containerd[1592]: time="2025-11-05T14:57:01.456774960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 14:57:01.456837 containerd[1592]: time="2025-11-05T14:57:01.456785080Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 14:57:01.456837 containerd[1592]: time="2025-11-05T14:57:01.456794920Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 14:57:01.456837 containerd[1592]: time="2025-11-05T14:57:01.456806560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 14:57:01.456950 containerd[1592]: time="2025-11-05T14:57:01.456930840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 14:57:01.456974 containerd[1592]: time="2025-11-05T14:57:01.456957880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 14:57:01.456991 containerd[1592]: time="2025-11-05T14:57:01.456979040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 14:57:01.457008 containerd[1592]: time="2025-11-05T14:57:01.456991040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 14:57:01.457008 containerd[1592]: time="2025-11-05T14:57:01.457001880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 14:57:01.457040 containerd[1592]: time="2025-11-05T14:57:01.457012360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 14:57:01.457040 containerd[1592]: time="2025-11-05T14:57:01.457023560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 14:57:01.457040 containerd[1592]: time="2025-11-05T14:57:01.457033320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 14:57:01.457095 containerd[1592]: time="2025-11-05T14:57:01.457047480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 14:57:01.457095 containerd[1592]: time="2025-11-05T14:57:01.457059080Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 14:57:01.457095 containerd[1592]: time="2025-11-05T14:57:01.457068800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 14:57:01.457263 containerd[1592]: time="2025-11-05T14:57:01.457248600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 14:57:01.457282 containerd[1592]: time="2025-11-05T14:57:01.457268880Z" level=info msg="Start snapshots syncer"
Nov 5 14:57:01.457313 containerd[1592]: time="2025-11-05T14:57:01.457302760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 14:57:01.457556 containerd[1592]: time="2025-11-05T14:57:01.457521040Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 14:57:01.457671 containerd[1592]: time="2025-11-05T14:57:01.457593560Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 14:57:01.457696 containerd[1592]: time="2025-11-05T14:57:01.457667720Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 14:57:01.457794 containerd[1592]: time="2025-11-05T14:57:01.457767640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 14:57:01.457824 containerd[1592]: time="2025-11-05T14:57:01.457796800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 14:57:01.457824 containerd[1592]: time="2025-11-05T14:57:01.457809640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 14:57:01.457857 containerd[1592]: time="2025-11-05T14:57:01.457825800Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 14:57:01.457857 containerd[1592]: time="2025-11-05T14:57:01.457837760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 14:57:01.457857 containerd[1592]: time="2025-11-05T14:57:01.457847880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 14:57:01.457913 containerd[1592]: time="2025-11-05T14:57:01.457858520Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 14:57:01.457913 containerd[1592]: time="2025-11-05T14:57:01.457882600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 14:57:01.457913 containerd[1592]: time="2025-11-05T14:57:01.457894280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 14:57:01.457913 containerd[1592]: time="2025-11-05T14:57:01.457905000Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 14:57:01.457979 containerd[1592]: time="2025-11-05T14:57:01.457929000Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 14:57:01.457979 containerd[1592]: time="2025-11-05T14:57:01.457942280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 14:57:01.457979 containerd[1592]: time="2025-11-05T14:57:01.457950720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 14:57:01.457979 containerd[1592]: time="2025-11-05T14:57:01.457960200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 14:57:01.457979 containerd[1592]: time="2025-11-05T14:57:01.457967520Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 14:57:01.457979 containerd[1592]: time="2025-11-05T14:57:01.457976720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 14:57:01.458075 containerd[1592]: time="2025-11-05T14:57:01.457987600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 14:57:01.458075 containerd[1592]: time="2025-11-05T14:57:01.458064960Z" level=info msg="runtime interface created"
Nov 5 14:57:01.458075 containerd[1592]: time="2025-11-05T14:57:01.458070680Z" level=info msg="created NRI interface"
Nov 5 14:57:01.458125 containerd[1592]: time="2025-11-05T14:57:01.458078760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 14:57:01.458125 containerd[1592]: time="2025-11-05T14:57:01.458089640Z" level=info msg="Connect containerd service"
Nov 5 14:57:01.458125 containerd[1592]: time="2025-11-05T14:57:01.458118360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 5 14:57:01.458883 containerd[1592]: time="2025-11-05T14:57:01.458858560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.524417280Z" level=info msg="Start subscribing containerd event"
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.524979160Z" level=info msg="Start recovering state"
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.525075240Z" level=info msg="Start event monitor"
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.525089400Z" level=info msg="Start cni network conf syncer for default"
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.525102520Z" level=info msg="Start streaming server"
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.525111080Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.525120720Z" level=info msg="runtime interface starting up..."
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.525129840Z" level=info msg="starting plugins..."
Nov 5 14:57:01.525365 containerd[1592]: time="2025-11-05T14:57:01.525143280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 14:57:01.525858 containerd[1592]: time="2025-11-05T14:57:01.525830680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 14:57:01.526087 containerd[1592]: time="2025-11-05T14:57:01.526070880Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 14:57:01.526923 containerd[1592]: time="2025-11-05T14:57:01.526854160Z" level=info msg="containerd successfully booted in 0.089902s"
Nov 5 14:57:01.526969 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 14:57:01.630687 tar[1586]: linux-arm64/README.md
Nov 5 14:57:01.647617 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 14:57:02.214086 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 14:57:02.233555 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 14:57:02.237005 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 14:57:02.256101 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 14:57:02.257743 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 14:57:02.260525 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 14:57:02.279671 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 14:57:02.282507 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 14:57:02.286054 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Nov 5 14:57:02.287471 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 14:57:02.810364 systemd-networkd[1487]: eth0: Gained IPv6LL
Nov 5 14:57:02.814678 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 5 14:57:02.816562 systemd[1]: Reached target network-online.target - Network is Online.
Nov 5 14:57:02.819119 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 5 14:57:02.822233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 14:57:02.829208 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 5 14:57:02.853916 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 5 14:57:02.856075 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 5 14:57:02.857651 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 5 14:57:02.860181 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 5 14:57:03.434779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 14:57:03.437446 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 14:57:03.441901 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 14:57:03.443708 systemd[1]: Startup finished in 1.170s (kernel) + 5.416s (initrd) + 3.999s (userspace) = 10.586s.
Nov 5 14:57:03.818961 kubelet[1700]: E1105 14:57:03.818784 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 14:57:03.821184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 14:57:03.821462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 14:57:03.822116 systemd[1]: kubelet.service: Consumed 754ms CPU time, 258.9M memory peak.
Nov 5 14:57:05.741237 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 14:57:05.743094 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:42722.service - OpenSSH per-connection server daemon (10.0.0.1:42722).
Nov 5 14:57:05.827264 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 42722 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc
Nov 5 14:57:05.828916 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:57:05.840867 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 14:57:05.843703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 14:57:05.853590 systemd-logind[1568]: New session 1 of user core.
Nov 5 14:57:05.872296 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 14:57:05.874783 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 14:57:05.897561 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 14:57:05.901873 systemd-logind[1568]: New session c1 of user core.
Nov 5 14:57:06.016406 systemd[1718]: Queued start job for default target default.target.
Nov 5 14:57:06.037585 systemd[1718]: Created slice app.slice - User Application Slice.
Nov 5 14:57:06.037639 systemd[1718]: Reached target paths.target - Paths.
Nov 5 14:57:06.037681 systemd[1718]: Reached target timers.target - Timers.
Nov 5 14:57:06.038924 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 14:57:06.048446 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 14:57:06.048502 systemd[1718]: Reached target sockets.target - Sockets.
Nov 5 14:57:06.048537 systemd[1718]: Reached target basic.target - Basic System.
Nov 5 14:57:06.048569 systemd[1718]: Reached target default.target - Main User Target.
Nov 5 14:57:06.048626 systemd[1718]: Startup finished in 141ms.
Nov 5 14:57:06.048769 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 14:57:06.050007 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 14:57:06.110711 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:42734.service - OpenSSH per-connection server daemon (10.0.0.1:42734).
Nov 5 14:57:06.159606 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 42734 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc
Nov 5 14:57:06.160846 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:57:06.164928 systemd-logind[1568]: New session 2 of user core.
Nov 5 14:57:06.173750 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 14:57:06.225558 sshd[1732]: Connection closed by 10.0.0.1 port 42734
Nov 5 14:57:06.225884 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Nov 5 14:57:06.235636 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:42734.service: Deactivated successfully.
Nov 5 14:57:06.237073 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 14:57:06.239189 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit.
Nov 5 14:57:06.241460 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:42748.service - OpenSSH per-connection server daemon (10.0.0.1:42748).
Nov 5 14:57:06.242168 systemd-logind[1568]: Removed session 2.
Nov 5 14:57:06.298026 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 42748 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc
Nov 5 14:57:06.299098 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 14:57:06.303386 systemd-logind[1568]: New session 3 of user core.
Nov 5 14:57:06.314832 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 14:57:06.364358 sshd[1741]: Connection closed by 10.0.0.1 port 42748
Nov 5 14:57:06.365012 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Nov 5 14:57:06.377740 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:42748.service: Deactivated successfully.
Nov 5 14:57:06.379961 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 14:57:06.380626 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit.
Nov 5 14:57:06.383085 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:42754.service - OpenSSH per-connection server daemon (10.0.0.1:42754).
Nov 5 14:57:06.383711 systemd-logind[1568]: Removed session 3.
Nov 5 14:57:06.443504 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 42754 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:57:06.444826 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:57:06.449236 systemd-logind[1568]: New session 4 of user core. Nov 5 14:57:06.463766 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 14:57:06.519168 sshd[1750]: Connection closed by 10.0.0.1 port 42754 Nov 5 14:57:06.519503 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Nov 5 14:57:06.529515 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:42754.service: Deactivated successfully. Nov 5 14:57:06.531847 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 14:57:06.532458 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit. Nov 5 14:57:06.534828 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:42770.service - OpenSSH per-connection server daemon (10.0.0.1:42770). Nov 5 14:57:06.535335 systemd-logind[1568]: Removed session 4. Nov 5 14:57:06.594865 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 42770 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:57:06.596165 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:57:06.603638 systemd-logind[1568]: New session 5 of user core. Nov 5 14:57:06.615865 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 5 14:57:06.684409 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 14:57:06.686503 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 14:57:06.698468 sudo[1761]: pam_unix(sudo:session): session closed for user root Nov 5 14:57:06.700773 sshd[1760]: Connection closed by 10.0.0.1 port 42770 Nov 5 14:57:06.700679 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Nov 5 14:57:06.718530 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:42770.service: Deactivated successfully. Nov 5 14:57:06.721038 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 14:57:06.721684 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit. Nov 5 14:57:06.723641 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:42776.service - OpenSSH per-connection server daemon (10.0.0.1:42776). Nov 5 14:57:06.724573 systemd-logind[1568]: Removed session 5. Nov 5 14:57:06.783732 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 42776 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:57:06.784948 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:57:06.789615 systemd-logind[1568]: New session 6 of user core. Nov 5 14:57:06.799761 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 5 14:57:06.853065 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 14:57:06.853316 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 14:57:06.904670 sudo[1772]: pam_unix(sudo:session): session closed for user root Nov 5 14:57:06.914345 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 14:57:06.915028 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 14:57:06.925998 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 14:57:06.972452 augenrules[1794]: No rules Nov 5 14:57:06.973576 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 14:57:06.974780 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 14:57:06.975784 sudo[1771]: pam_unix(sudo:session): session closed for user root Nov 5 14:57:06.977280 sshd[1770]: Connection closed by 10.0.0.1 port 42776 Nov 5 14:57:06.977842 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Nov 5 14:57:06.989580 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:42776.service: Deactivated successfully. Nov 5 14:57:06.994471 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 14:57:06.995537 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit. Nov 5 14:57:07.003884 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:42788.service - OpenSSH per-connection server daemon (10.0.0.1:42788). Nov 5 14:57:07.009032 systemd-logind[1568]: Removed session 6. Nov 5 14:57:07.060419 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 42788 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:57:07.061955 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:57:07.066249 systemd-logind[1568]: New session 7 of user core. 
Nov 5 14:57:07.079769 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 14:57:07.132021 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 14:57:07.132270 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 14:57:07.422801 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 14:57:07.438942 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 14:57:07.646663 dockerd[1827]: time="2025-11-05T14:57:07.644963494Z" level=info msg="Starting up" Nov 5 14:57:07.647174 dockerd[1827]: time="2025-11-05T14:57:07.647149469Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 14:57:07.657866 dockerd[1827]: time="2025-11-05T14:57:07.657827357Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 14:57:07.792156 dockerd[1827]: time="2025-11-05T14:57:07.791942265Z" level=info msg="Loading containers: start." Nov 5 14:57:07.800611 kernel: Initializing XFRM netlink socket Nov 5 14:57:08.010160 systemd-networkd[1487]: docker0: Link UP Nov 5 14:57:08.014114 dockerd[1827]: time="2025-11-05T14:57:08.014064273Z" level=info msg="Loading containers: done." 
Nov 5 14:57:08.029213 dockerd[1827]: time="2025-11-05T14:57:08.029138940Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 14:57:08.029391 dockerd[1827]: time="2025-11-05T14:57:08.029234613Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 14:57:08.029391 dockerd[1827]: time="2025-11-05T14:57:08.029341848Z" level=info msg="Initializing buildkit" Nov 5 14:57:08.054058 dockerd[1827]: time="2025-11-05T14:57:08.053944598Z" level=info msg="Completed buildkit initialization" Nov 5 14:57:08.059081 dockerd[1827]: time="2025-11-05T14:57:08.059033296Z" level=info msg="Daemon has completed initialization" Nov 5 14:57:08.059320 dockerd[1827]: time="2025-11-05T14:57:08.059094486Z" level=info msg="API listen on /run/docker.sock" Nov 5 14:57:08.059419 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 14:57:08.769936 containerd[1592]: time="2025-11-05T14:57:08.769873364Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 14:57:09.446443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950356294.mount: Deactivated successfully. 
Nov 5 14:57:10.538376 containerd[1592]: time="2025-11-05T14:57:10.538312079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:10.538777 containerd[1592]: time="2025-11-05T14:57:10.538731071Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Nov 5 14:57:10.539662 containerd[1592]: time="2025-11-05T14:57:10.539639077Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:10.542320 containerd[1592]: time="2025-11-05T14:57:10.542288978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:10.543356 containerd[1592]: time="2025-11-05T14:57:10.543328471Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.773409868s" Nov 5 14:57:10.543407 containerd[1592]: time="2025-11-05T14:57:10.543371165Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 5 14:57:10.544711 containerd[1592]: time="2025-11-05T14:57:10.544681175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 14:57:11.968618 containerd[1592]: time="2025-11-05T14:57:11.968564658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:11.970240 containerd[1592]: time="2025-11-05T14:57:11.970196160Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Nov 5 14:57:11.971213 containerd[1592]: time="2025-11-05T14:57:11.971162424Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:11.974170 containerd[1592]: time="2025-11-05T14:57:11.974131769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:11.975913 containerd[1592]: time="2025-11-05T14:57:11.975885754Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.431172561s" Nov 5 14:57:11.975951 containerd[1592]: time="2025-11-05T14:57:11.975920686Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 5 14:57:11.976453 containerd[1592]: time="2025-11-05T14:57:11.976430411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 14:57:13.158115 containerd[1592]: time="2025-11-05T14:57:13.158043118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:13.159117 containerd[1592]: time="2025-11-05T14:57:13.159071823Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Nov 5 14:57:13.160233 containerd[1592]: time="2025-11-05T14:57:13.160195386Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:13.163133 containerd[1592]: time="2025-11-05T14:57:13.163095781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:13.164233 containerd[1592]: time="2025-11-05T14:57:13.164103053Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.187643965s" Nov 5 14:57:13.164279 containerd[1592]: time="2025-11-05T14:57:13.164233432Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 5 14:57:13.164735 containerd[1592]: time="2025-11-05T14:57:13.164703282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 14:57:13.969384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 14:57:13.972757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:57:14.127690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 5 14:57:14.143868 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 14:57:14.186950 kubelet[2125]: E1105 14:57:14.186912 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 14:57:14.190413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 14:57:14.190546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 14:57:14.192661 systemd[1]: kubelet.service: Consumed 154ms CPU time, 106.3M memory peak. Nov 5 14:57:14.280268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425063978.mount: Deactivated successfully. Nov 5 14:57:14.618936 containerd[1592]: time="2025-11-05T14:57:14.618821195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:14.620053 containerd[1592]: time="2025-11-05T14:57:14.620024571Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Nov 5 14:57:14.621232 containerd[1592]: time="2025-11-05T14:57:14.620877811Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:14.623114 containerd[1592]: time="2025-11-05T14:57:14.623086069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:14.623640 containerd[1592]: time="2025-11-05T14:57:14.623607037Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.458871795s" Nov 5 14:57:14.623711 containerd[1592]: time="2025-11-05T14:57:14.623639736Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 5 14:57:14.624076 containerd[1592]: time="2025-11-05T14:57:14.624044866Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 14:57:15.212062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753996690.mount: Deactivated successfully. Nov 5 14:57:16.165840 containerd[1592]: time="2025-11-05T14:57:16.165791303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:16.167733 containerd[1592]: time="2025-11-05T14:57:16.167691653Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Nov 5 14:57:16.168528 containerd[1592]: time="2025-11-05T14:57:16.168477791Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:16.172317 containerd[1592]: time="2025-11-05T14:57:16.172230158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:16.173059 containerd[1592]: time="2025-11-05T14:57:16.172864252Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.548789043s" Nov 5 14:57:16.173059 containerd[1592]: time="2025-11-05T14:57:16.172904215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 5 14:57:16.173487 containerd[1592]: time="2025-11-05T14:57:16.173378940Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 14:57:16.638928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618547013.mount: Deactivated successfully. Nov 5 14:57:16.645637 containerd[1592]: time="2025-11-05T14:57:16.644925880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 14:57:16.645857 containerd[1592]: time="2025-11-05T14:57:16.645830739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 5 14:57:16.646417 containerd[1592]: time="2025-11-05T14:57:16.646389012Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 14:57:16.648646 containerd[1592]: time="2025-11-05T14:57:16.648616954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 14:57:16.649214 containerd[1592]: time="2025-11-05T14:57:16.649184604Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.772542ms" Nov 5 14:57:16.649272 containerd[1592]: time="2025-11-05T14:57:16.649217604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 5 14:57:16.649709 containerd[1592]: time="2025-11-05T14:57:16.649680136Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 14:57:17.705528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650054516.mount: Deactivated successfully. Nov 5 14:57:19.939987 containerd[1592]: time="2025-11-05T14:57:19.939937517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:19.940623 containerd[1592]: time="2025-11-05T14:57:19.940569134Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Nov 5 14:57:19.941557 containerd[1592]: time="2025-11-05T14:57:19.941531902Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:19.944625 containerd[1592]: time="2025-11-05T14:57:19.944255494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:19.946360 containerd[1592]: time="2025-11-05T14:57:19.946325339Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.296595235s" Nov 5 14:57:19.946479 containerd[1592]: time="2025-11-05T14:57:19.946462177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 5 14:57:24.236363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 14:57:24.237856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:57:24.258165 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 14:57:24.258233 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 14:57:24.258623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:57:24.261378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:57:24.281457 systemd[1]: Reload requested from client PID 2285 ('systemctl') (unit session-7.scope)... Nov 5 14:57:24.281474 systemd[1]: Reloading... Nov 5 14:57:24.349741 zram_generator::config[2329]: No configuration found. Nov 5 14:57:24.680693 systemd[1]: Reloading finished in 398 ms. Nov 5 14:57:24.731650 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:57:24.733884 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 14:57:24.734077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:57:24.734120 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95.3M memory peak. Nov 5 14:57:24.735401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:57:24.882628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 5 14:57:24.887173 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 14:57:24.921035 kubelet[2376]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:57:24.921035 kubelet[2376]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 14:57:24.921035 kubelet[2376]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:57:24.921349 kubelet[2376]: I1105 14:57:24.921068 2376 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 14:57:25.698335 kubelet[2376]: I1105 14:57:25.698292 2376 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 14:57:25.698335 kubelet[2376]: I1105 14:57:25.698320 2376 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 14:57:25.698550 kubelet[2376]: I1105 14:57:25.698534 2376 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 14:57:25.720402 kubelet[2376]: E1105 14:57:25.720349 2376 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 14:57:25.721476 kubelet[2376]: I1105 14:57:25.721464 2376 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 14:57:25.730612 kubelet[2376]: I1105 14:57:25.730080 2376 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 14:57:25.732536 kubelet[2376]: I1105 14:57:25.732506 2376 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 14:57:25.733509 kubelet[2376]: I1105 14:57:25.733464 2376 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 14:57:25.733665 kubelet[2376]: I1105 14:57:25.733503 2376 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerR
econcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 14:57:25.733754 kubelet[2376]: I1105 14:57:25.733734 2376 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 14:57:25.733754 kubelet[2376]: I1105 14:57:25.733744 2376 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 14:57:25.733931 kubelet[2376]: I1105 14:57:25.733918 2376 state_mem.go:36] "Initialized new in-memory state store" Nov 5 14:57:25.736346 kubelet[2376]: I1105 14:57:25.736302 2376 kubelet.go:480] "Attempting to sync node with API server" Nov 5 14:57:25.736346 kubelet[2376]: I1105 14:57:25.736339 2376 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 14:57:25.736445 kubelet[2376]: I1105 14:57:25.736374 2376 kubelet.go:386] "Adding apiserver pod source" Nov 5 14:57:25.737393 kubelet[2376]: I1105 14:57:25.737380 2376 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 14:57:25.738344 kubelet[2376]: I1105 14:57:25.738325 2376 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 14:57:25.739045 kubelet[2376]: I1105 14:57:25.739006 2376 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 14:57:25.739415 kubelet[2376]: W1105 14:57:25.739139 2376 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 5 14:57:25.740917 kubelet[2376]: E1105 14:57:25.740787 2376 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 14:57:25.741280 kubelet[2376]: E1105 14:57:25.740805 2376 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 14:57:25.741330 kubelet[2376]: I1105 14:57:25.741317 2376 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 14:57:25.741409 kubelet[2376]: I1105 14:57:25.741401 2376 server.go:1289] "Started kubelet" Nov 5 14:57:25.741754 kubelet[2376]: I1105 14:57:25.741723 2376 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 14:57:25.742352 kubelet[2376]: I1105 14:57:25.742313 2376 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 14:57:25.743243 kubelet[2376]: I1105 14:57:25.743218 2376 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 14:57:25.744564 kubelet[2376]: I1105 14:57:25.742645 2376 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 14:57:25.745781 kubelet[2376]: I1105 14:57:25.742715 2376 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 14:57:25.747598 kubelet[2376]: E1105 14:57:25.745024 2376 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875243683ac767a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 14:57:25.741368954 +0000 UTC m=+0.851104623,LastTimestamp:2025-11-05 14:57:25.741368954 +0000 UTC m=+0.851104623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 14:57:25.747598 kubelet[2376]: I1105 14:57:25.746838 2376 server.go:317] "Adding debug handlers to kubelet server" Nov 5 14:57:25.747598 kubelet[2376]: I1105 14:57:25.746968 2376 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 14:57:25.747598 kubelet[2376]: I1105 14:57:25.747147 2376 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 14:57:25.747598 kubelet[2376]: I1105 14:57:25.747195 2376 reconciler.go:26] "Reconciler: start to sync state" Nov 5 14:57:25.747598 kubelet[2376]: E1105 14:57:25.747522 2376 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 14:57:25.747787 kubelet[2376]: E1105 14:57:25.747630 2376 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 14:57:25.748178 kubelet[2376]: I1105 14:57:25.748143 2376 factory.go:223] Registration of the systemd container factory successfully Nov 5 14:57:25.748693 kubelet[2376]: I1105 14:57:25.748667 2376 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 14:57:25.748846 kubelet[2376]: E1105 14:57:25.748331 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:57:25.749251 kubelet[2376]: E1105 14:57:25.749203 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Nov 5 14:57:25.750091 kubelet[2376]: I1105 14:57:25.750067 2376 factory.go:223] Registration of the containerd container factory successfully Nov 5 14:57:25.763753 kubelet[2376]: I1105 14:57:25.763715 2376 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 14:57:25.763753 kubelet[2376]: I1105 14:57:25.763764 2376 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 14:57:25.763900 kubelet[2376]: I1105 14:57:25.763790 2376 state_mem.go:36] "Initialized new in-memory state store" Nov 5 14:57:25.765026 kubelet[2376]: I1105 14:57:25.764990 2376 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 14:57:25.767561 kubelet[2376]: I1105 14:57:25.767439 2376 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 5 14:57:25.767561 kubelet[2376]: I1105 14:57:25.767465 2376 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 14:57:25.767561 kubelet[2376]: I1105 14:57:25.767482 2376 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 14:57:25.767561 kubelet[2376]: I1105 14:57:25.767488 2376 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 14:57:25.767561 kubelet[2376]: E1105 14:57:25.767528 2376 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 14:57:25.849895 kubelet[2376]: E1105 14:57:25.849837 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:57:25.868083 kubelet[2376]: E1105 14:57:25.868061 2376 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 14:57:25.928245 kubelet[2376]: I1105 14:57:25.928195 2376 policy_none.go:49] "None policy: Start" Nov 5 14:57:25.928245 kubelet[2376]: I1105 14:57:25.928227 2376 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 14:57:25.928245 kubelet[2376]: I1105 14:57:25.928241 2376 state_mem.go:35] "Initializing new in-memory state store" Nov 5 14:57:25.928748 kubelet[2376]: E1105 14:57:25.928707 2376 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 14:57:25.932715 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 5 14:57:25.949791 kubelet[2376]: E1105 14:57:25.949646 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Nov 5 14:57:25.950890 kubelet[2376]: E1105 14:57:25.950570 2376 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:57:25.953184 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 14:57:25.957220 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 14:57:25.983364 kubelet[2376]: E1105 14:57:25.983332 2376 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 14:57:25.983558 kubelet[2376]: I1105 14:57:25.983537 2376 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 14:57:25.983613 kubelet[2376]: I1105 14:57:25.983556 2376 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 14:57:25.983857 kubelet[2376]: I1105 14:57:25.983810 2376 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 14:57:25.984600 kubelet[2376]: E1105 14:57:25.984560 2376 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 14:57:25.985004 kubelet[2376]: E1105 14:57:25.984958 2376 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 14:57:26.078187 systemd[1]: Created slice kubepods-burstable-pod83075b38e5b456efeeb9ffcf6faa6b79.slice - libcontainer container kubepods-burstable-pod83075b38e5b456efeeb9ffcf6faa6b79.slice. 
Nov 5 14:57:26.084459 kubelet[2376]: I1105 14:57:26.084428 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:57:26.084866 kubelet[2376]: E1105 14:57:26.084833 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Nov 5 14:57:26.094452 kubelet[2376]: E1105 14:57:26.094250 2376 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:57:26.096927 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 5 14:57:26.098731 kubelet[2376]: E1105 14:57:26.098711 2376 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:57:26.100561 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 5 14:57:26.101908 kubelet[2376]: E1105 14:57:26.101891 2376 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:57:26.150305 kubelet[2376]: I1105 14:57:26.150245 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83075b38e5b456efeeb9ffcf6faa6b79-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"83075b38e5b456efeeb9ffcf6faa6b79\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:57:26.150305 kubelet[2376]: I1105 14:57:26.150280 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:57:26.150511 kubelet[2376]: I1105 14:57:26.150453 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:57:26.150511 kubelet[2376]: I1105 14:57:26.150487 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 5 14:57:26.150651 kubelet[2376]: I1105 14:57:26.150601 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/83075b38e5b456efeeb9ffcf6faa6b79-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"83075b38e5b456efeeb9ffcf6faa6b79\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:57:26.150651 kubelet[2376]: I1105 14:57:26.150623 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:57:26.150802 kubelet[2376]: I1105 14:57:26.150641 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:57:26.150802 kubelet[2376]: I1105 14:57:26.150770 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:57:26.150802 kubelet[2376]: I1105 14:57:26.150788 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83075b38e5b456efeeb9ffcf6faa6b79-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"83075b38e5b456efeeb9ffcf6faa6b79\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:57:26.286479 kubelet[2376]: I1105 14:57:26.286440 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:57:26.286836 kubelet[2376]: E1105 
14:57:26.286787 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Nov 5 14:57:26.350587 kubelet[2376]: E1105 14:57:26.350544 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Nov 5 14:57:26.394978 kubelet[2376]: E1105 14:57:26.394945 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:26.395568 containerd[1592]: time="2025-11-05T14:57:26.395520913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:83075b38e5b456efeeb9ffcf6faa6b79,Namespace:kube-system,Attempt:0,}" Nov 5 14:57:26.399763 kubelet[2376]: E1105 14:57:26.399730 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:26.400085 containerd[1592]: time="2025-11-05T14:57:26.400058321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 5 14:57:26.402405 kubelet[2376]: E1105 14:57:26.402311 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:26.402753 containerd[1592]: time="2025-11-05T14:57:26.402614707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 5 14:57:26.421872 containerd[1592]: 
time="2025-11-05T14:57:26.421823719Z" level=info msg="connecting to shim 146d51eb6ced2113bc8d87cd001aff9a6106e9c3789ac0541bb70b54bae8aac9" address="unix:///run/containerd/s/dafe9696a854a68f4a9e0452a91273a23e10e8982492911ffb907b74f9c04a7e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:57:26.430627 containerd[1592]: time="2025-11-05T14:57:26.430572892Z" level=info msg="connecting to shim 4de5b82f9ef673b1e100a3b156b8d3659536ea7045fa3dabd2ea5f4081fb56ed" address="unix:///run/containerd/s/ad9c071ff18792a0efecc3b267829232c4159bc30b741b9a3cc62b85034cdaf1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:57:26.437311 containerd[1592]: time="2025-11-05T14:57:26.437157067Z" level=info msg="connecting to shim 6309ef84c46680c9df88e6069898463a1d87072d390bd00a6c75d042a4b00785" address="unix:///run/containerd/s/fddcc42422b2d3feb865c594e5db7664bb676f0b54fd907e88e51e0f4bd96ce3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:57:26.455733 systemd[1]: Started cri-containerd-146d51eb6ced2113bc8d87cd001aff9a6106e9c3789ac0541bb70b54bae8aac9.scope - libcontainer container 146d51eb6ced2113bc8d87cd001aff9a6106e9c3789ac0541bb70b54bae8aac9. Nov 5 14:57:26.458469 systemd[1]: Started cri-containerd-4de5b82f9ef673b1e100a3b156b8d3659536ea7045fa3dabd2ea5f4081fb56ed.scope - libcontainer container 4de5b82f9ef673b1e100a3b156b8d3659536ea7045fa3dabd2ea5f4081fb56ed. Nov 5 14:57:26.462041 systemd[1]: Started cri-containerd-6309ef84c46680c9df88e6069898463a1d87072d390bd00a6c75d042a4b00785.scope - libcontainer container 6309ef84c46680c9df88e6069898463a1d87072d390bd00a6c75d042a4b00785. 
Nov 5 14:57:26.497374 containerd[1592]: time="2025-11-05T14:57:26.497329352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:83075b38e5b456efeeb9ffcf6faa6b79,Namespace:kube-system,Attempt:0,} returns sandbox id \"146d51eb6ced2113bc8d87cd001aff9a6106e9c3789ac0541bb70b54bae8aac9\"" Nov 5 14:57:26.500593 kubelet[2376]: E1105 14:57:26.499909 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:26.501655 containerd[1592]: time="2025-11-05T14:57:26.501568040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4de5b82f9ef673b1e100a3b156b8d3659536ea7045fa3dabd2ea5f4081fb56ed\"" Nov 5 14:57:26.502998 kubelet[2376]: E1105 14:57:26.502975 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:26.504180 containerd[1592]: time="2025-11-05T14:57:26.504152351Z" level=info msg="CreateContainer within sandbox \"146d51eb6ced2113bc8d87cd001aff9a6106e9c3789ac0541bb70b54bae8aac9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 14:57:26.507255 containerd[1592]: time="2025-11-05T14:57:26.506538824Z" level=info msg="CreateContainer within sandbox \"4de5b82f9ef673b1e100a3b156b8d3659536ea7045fa3dabd2ea5f4081fb56ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 14:57:26.513439 containerd[1592]: time="2025-11-05T14:57:26.513402568Z" level=info msg="Container 5fcb488235ee014fcd8525af196cf31229964be18453726b4d1865a899042a54: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:57:26.527460 containerd[1592]: time="2025-11-05T14:57:26.527398207Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"6309ef84c46680c9df88e6069898463a1d87072d390bd00a6c75d042a4b00785\"" Nov 5 14:57:26.528203 kubelet[2376]: E1105 14:57:26.528175 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:26.530668 containerd[1592]: time="2025-11-05T14:57:26.530599629Z" level=info msg="Container e38dec1c90e3a642ce360b7ac59aaf4c57839667a5ecdb5799c58f6d99c13bd9: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:57:26.534208 containerd[1592]: time="2025-11-05T14:57:26.534159867Z" level=info msg="CreateContainer within sandbox \"6309ef84c46680c9df88e6069898463a1d87072d390bd00a6c75d042a4b00785\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 14:57:26.536193 containerd[1592]: time="2025-11-05T14:57:26.536161202Z" level=info msg="CreateContainer within sandbox \"4de5b82f9ef673b1e100a3b156b8d3659536ea7045fa3dabd2ea5f4081fb56ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5fcb488235ee014fcd8525af196cf31229964be18453726b4d1865a899042a54\"" Nov 5 14:57:26.537861 containerd[1592]: time="2025-11-05T14:57:26.537780602Z" level=info msg="StartContainer for \"5fcb488235ee014fcd8525af196cf31229964be18453726b4d1865a899042a54\"" Nov 5 14:57:26.538831 containerd[1592]: time="2025-11-05T14:57:26.538782812Z" level=info msg="connecting to shim 5fcb488235ee014fcd8525af196cf31229964be18453726b4d1865a899042a54" address="unix:///run/containerd/s/ad9c071ff18792a0efecc3b267829232c4159bc30b741b9a3cc62b85034cdaf1" protocol=ttrpc version=3 Nov 5 14:57:26.540602 containerd[1592]: time="2025-11-05T14:57:26.539914750Z" level=info msg="CreateContainer within sandbox \"146d51eb6ced2113bc8d87cd001aff9a6106e9c3789ac0541bb70b54bae8aac9\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e38dec1c90e3a642ce360b7ac59aaf4c57839667a5ecdb5799c58f6d99c13bd9\"" Nov 5 14:57:26.540602 containerd[1592]: time="2025-11-05T14:57:26.540253574Z" level=info msg="StartContainer for \"e38dec1c90e3a642ce360b7ac59aaf4c57839667a5ecdb5799c58f6d99c13bd9\"" Nov 5 14:57:26.541261 containerd[1592]: time="2025-11-05T14:57:26.541232106Z" level=info msg="connecting to shim e38dec1c90e3a642ce360b7ac59aaf4c57839667a5ecdb5799c58f6d99c13bd9" address="unix:///run/containerd/s/dafe9696a854a68f4a9e0452a91273a23e10e8982492911ffb907b74f9c04a7e" protocol=ttrpc version=3 Nov 5 14:57:26.541385 containerd[1592]: time="2025-11-05T14:57:26.541362676Z" level=info msg="Container 80117ee44e43e5b54aad63f484c7ebcd4a49263693e1626590cb549cd394fafa: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:57:26.551216 containerd[1592]: time="2025-11-05T14:57:26.551166542Z" level=info msg="CreateContainer within sandbox \"6309ef84c46680c9df88e6069898463a1d87072d390bd00a6c75d042a4b00785\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"80117ee44e43e5b54aad63f484c7ebcd4a49263693e1626590cb549cd394fafa\"" Nov 5 14:57:26.551725 containerd[1592]: time="2025-11-05T14:57:26.551697275Z" level=info msg="StartContainer for \"80117ee44e43e5b54aad63f484c7ebcd4a49263693e1626590cb549cd394fafa\"" Nov 5 14:57:26.553092 containerd[1592]: time="2025-11-05T14:57:26.553061225Z" level=info msg="connecting to shim 80117ee44e43e5b54aad63f484c7ebcd4a49263693e1626590cb549cd394fafa" address="unix:///run/containerd/s/fddcc42422b2d3feb865c594e5db7664bb676f0b54fd907e88e51e0f4bd96ce3" protocol=ttrpc version=3 Nov 5 14:57:26.559725 systemd[1]: Started cri-containerd-5fcb488235ee014fcd8525af196cf31229964be18453726b4d1865a899042a54.scope - libcontainer container 5fcb488235ee014fcd8525af196cf31229964be18453726b4d1865a899042a54. 
Nov 5 14:57:26.560733 systemd[1]: Started cri-containerd-e38dec1c90e3a642ce360b7ac59aaf4c57839667a5ecdb5799c58f6d99c13bd9.scope - libcontainer container e38dec1c90e3a642ce360b7ac59aaf4c57839667a5ecdb5799c58f6d99c13bd9. Nov 5 14:57:26.576742 systemd[1]: Started cri-containerd-80117ee44e43e5b54aad63f484c7ebcd4a49263693e1626590cb549cd394fafa.scope - libcontainer container 80117ee44e43e5b54aad63f484c7ebcd4a49263693e1626590cb549cd394fafa. Nov 5 14:57:26.609793 containerd[1592]: time="2025-11-05T14:57:26.609753161Z" level=info msg="StartContainer for \"5fcb488235ee014fcd8525af196cf31229964be18453726b4d1865a899042a54\" returns successfully" Nov 5 14:57:26.614863 containerd[1592]: time="2025-11-05T14:57:26.614733159Z" level=info msg="StartContainer for \"e38dec1c90e3a642ce360b7ac59aaf4c57839667a5ecdb5799c58f6d99c13bd9\" returns successfully" Nov 5 14:57:26.630957 containerd[1592]: time="2025-11-05T14:57:26.630853331Z" level=info msg="StartContainer for \"80117ee44e43e5b54aad63f484c7ebcd4a49263693e1626590cb549cd394fafa\" returns successfully" Nov 5 14:57:26.688832 kubelet[2376]: I1105 14:57:26.688792 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:57:26.690721 kubelet[2376]: E1105 14:57:26.690679 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Nov 5 14:57:26.775015 kubelet[2376]: E1105 14:57:26.774983 2376 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:57:26.775122 kubelet[2376]: E1105 14:57:26.775105 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:26.775363 kubelet[2376]: E1105 14:57:26.775342 2376 kubelet.go:3305] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:57:26.775443 kubelet[2376]: E1105 14:57:26.775427 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:26.777381 kubelet[2376]: E1105 14:57:26.777359 2376 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:57:26.777483 kubelet[2376]: E1105 14:57:26.777467 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:27.492304 kubelet[2376]: I1105 14:57:27.492272 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:57:27.779181 kubelet[2376]: E1105 14:57:27.779033 2376 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:57:27.779181 kubelet[2376]: E1105 14:57:27.779153 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:27.779383 kubelet[2376]: E1105 14:57:27.779361 2376 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:57:27.779612 kubelet[2376]: E1105 14:57:27.779472 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:28.277288 kubelet[2376]: E1105 14:57:28.277121 2376 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" Nov 5 14:57:28.453424 kubelet[2376]: I1105 14:57:28.453371 2376 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 14:57:28.548918 kubelet[2376]: I1105 14:57:28.548528 2376 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 14:57:28.553688 kubelet[2376]: E1105 14:57:28.553645 2376 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 14:57:28.553688 kubelet[2376]: I1105 14:57:28.553688 2376 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:57:28.555590 kubelet[2376]: E1105 14:57:28.555316 2376 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:57:28.555590 kubelet[2376]: I1105 14:57:28.555343 2376 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 14:57:28.557011 kubelet[2376]: E1105 14:57:28.556992 2376 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 14:57:28.742615 kubelet[2376]: I1105 14:57:28.742565 2376 apiserver.go:52] "Watching apiserver" Nov 5 14:57:28.748143 kubelet[2376]: I1105 14:57:28.748117 2376 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 14:57:28.779983 kubelet[2376]: I1105 14:57:28.779952 2376 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 14:57:28.782241 kubelet[2376]: E1105 14:57:28.782210 2376 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 14:57:28.782406 kubelet[2376]: E1105 14:57:28.782389 2376 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:30.326528 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-7.scope)... Nov 5 14:57:30.326544 systemd[1]: Reloading... Nov 5 14:57:30.408748 zram_generator::config[2707]: No configuration found. Nov 5 14:57:30.568887 systemd[1]: Reloading finished in 242 ms. Nov 5 14:57:30.602143 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:57:30.614458 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 14:57:30.614713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:57:30.614771 systemd[1]: kubelet.service: Consumed 1.214s CPU time, 128.1M memory peak. Nov 5 14:57:30.616548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:57:30.779107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:57:30.785936 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 14:57:30.827569 kubelet[2745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:57:30.827569 kubelet[2745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 5 14:57:30.827569 kubelet[2745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:57:30.828610 kubelet[2745]: I1105 14:57:30.828043 2745 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 14:57:30.833962 kubelet[2745]: I1105 14:57:30.833925 2745 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 14:57:30.833962 kubelet[2745]: I1105 14:57:30.833951 2745 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 14:57:30.834298 kubelet[2745]: I1105 14:57:30.834268 2745 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 14:57:30.835977 kubelet[2745]: I1105 14:57:30.835954 2745 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 14:57:30.838136 kubelet[2745]: I1105 14:57:30.838111 2745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 14:57:30.841563 kubelet[2745]: I1105 14:57:30.841543 2745 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 14:57:30.845416 kubelet[2745]: I1105 14:57:30.844191 2745 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 14:57:30.845416 kubelet[2745]: I1105 14:57:30.844389 2745 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 14:57:30.845416 kubelet[2745]: I1105 14:57:30.844412 2745 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 14:57:30.845416 kubelet[2745]: I1105 14:57:30.844547 2745 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 14:57:30.845622 
kubelet[2745]: I1105 14:57:30.844554 2745 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 14:57:30.845622 kubelet[2745]: I1105 14:57:30.844617 2745 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 14:57:30.845622 kubelet[2745]: I1105 14:57:30.844748 2745 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 14:57:30.845622 kubelet[2745]: I1105 14:57:30.844761 2745 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 14:57:30.845622 kubelet[2745]: I1105 14:57:30.844781 2745 kubelet.go:386] "Adding apiserver pod source"
Nov 5 14:57:30.845622 kubelet[2745]: I1105 14:57:30.844792 2745 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 14:57:30.845874 kubelet[2745]: I1105 14:57:30.845849 2745 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 14:57:30.846426 kubelet[2745]: I1105 14:57:30.846403 2745 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 14:57:30.855393 kubelet[2745]: I1105 14:57:30.854827 2745 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 14:57:30.855541 kubelet[2745]: I1105 14:57:30.855527 2745 server.go:1289] "Started kubelet"
Nov 5 14:57:30.857481 kubelet[2745]: I1105 14:57:30.857431 2745 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 14:57:30.858979 kubelet[2745]: I1105 14:57:30.858336 2745 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 14:57:30.860335 kubelet[2745]: I1105 14:57:30.860299 2745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 14:57:30.863105 kubelet[2745]: I1105 14:57:30.862258 2745 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 14:57:30.863105 kubelet[2745]: I1105 14:57:30.862346 2745 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 14:57:30.863105 kubelet[2745]: I1105 14:57:30.862444 2745 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 14:57:30.864469 kubelet[2745]: E1105 14:57:30.863992 2745 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 14:57:30.864761 kubelet[2745]: I1105 14:57:30.864743 2745 factory.go:223] Registration of the systemd container factory successfully
Nov 5 14:57:30.864852 kubelet[2745]: I1105 14:57:30.864834 2745 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 14:57:30.865037 kubelet[2745]: I1105 14:57:30.864992 2745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 14:57:30.865263 kubelet[2745]: I1105 14:57:30.865247 2745 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 14:57:30.865938 kubelet[2745]: I1105 14:57:30.865891 2745 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 14:57:30.866336 kubelet[2745]: I1105 14:57:30.866306 2745 factory.go:223] Registration of the containerd container factory successfully
Nov 5 14:57:30.878840 kubelet[2745]: I1105 14:57:30.878776 2745 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 14:57:30.880256 kubelet[2745]: I1105 14:57:30.880230 2745 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 14:57:30.880256 kubelet[2745]: I1105 14:57:30.880253 2745 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 14:57:30.880352 kubelet[2745]: I1105 14:57:30.880271 2745 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 14:57:30.880352 kubelet[2745]: I1105 14:57:30.880278 2745 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 14:57:30.880352 kubelet[2745]: E1105 14:57:30.880319 2745 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 14:57:30.904118 kubelet[2745]: I1105 14:57:30.904088 2745 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 14:57:30.904118 kubelet[2745]: I1105 14:57:30.904105 2745 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 14:57:30.904118 kubelet[2745]: I1105 14:57:30.904124 2745 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 14:57:30.904284 kubelet[2745]: I1105 14:57:30.904244 2745 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 14:57:30.904284 kubelet[2745]: I1105 14:57:30.904254 2745 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 14:57:30.904284 kubelet[2745]: I1105 14:57:30.904270 2745 policy_none.go:49] "None policy: Start"
Nov 5 14:57:30.904284 kubelet[2745]: I1105 14:57:30.904278 2745 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 14:57:30.904284 kubelet[2745]: I1105 14:57:30.904286 2745 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 14:57:30.904439 kubelet[2745]: I1105 14:57:30.904398 2745 state_mem.go:75] "Updated machine memory state"
Nov 5 14:57:30.908167 kubelet[2745]: E1105 14:57:30.908052 2745 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 14:57:30.908551 kubelet[2745]: I1105 14:57:30.908532 2745 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 14:57:30.908693 kubelet[2745]: I1105 14:57:30.908662 2745 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 14:57:30.909057 kubelet[2745]: I1105 14:57:30.908928 2745 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 14:57:30.909740 kubelet[2745]: E1105 14:57:30.909672 2745 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 14:57:30.981968 kubelet[2745]: I1105 14:57:30.981928 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 14:57:30.982714 kubelet[2745]: I1105 14:57:30.982112 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 14:57:30.983356 kubelet[2745]: I1105 14:57:30.983318 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 14:57:31.017196 kubelet[2745]: I1105 14:57:31.017154 2745 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 14:57:31.026445 kubelet[2745]: I1105 14:57:31.026412 2745 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 5 14:57:31.026531 kubelet[2745]: I1105 14:57:31.026499 2745 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 14:57:31.064197 kubelet[2745]: I1105 14:57:31.064144 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83075b38e5b456efeeb9ffcf6faa6b79-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"83075b38e5b456efeeb9ffcf6faa6b79\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 14:57:31.064197 kubelet[2745]: I1105 14:57:31.064197 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83075b38e5b456efeeb9ffcf6faa6b79-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"83075b38e5b456efeeb9ffcf6faa6b79\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 14:57:31.064345 kubelet[2745]: I1105 14:57:31.064243 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 14:57:31.064345 kubelet[2745]: I1105 14:57:31.064273 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 14:57:31.064345 kubelet[2745]: I1105 14:57:31.064290 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 14:57:31.064345 kubelet[2745]: I1105 14:57:31.064305 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 14:57:31.064345 kubelet[2745]: I1105 14:57:31.064321 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83075b38e5b456efeeb9ffcf6faa6b79-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"83075b38e5b456efeeb9ffcf6faa6b79\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 14:57:31.064487 kubelet[2745]: I1105 14:57:31.064348 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 14:57:31.064487 kubelet[2745]: I1105 14:57:31.064363 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 14:57:31.288489 kubelet[2745]: E1105 14:57:31.288284 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:31.288489 kubelet[2745]: E1105 14:57:31.288335 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:31.288489 kubelet[2745]: E1105 14:57:31.288432 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:31.845877 kubelet[2745]: I1105 14:57:31.845829 2745 apiserver.go:52] "Watching apiserver"
Nov 5 14:57:31.862882 kubelet[2745]: I1105 14:57:31.862855 2745 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 14:57:31.895557 kubelet[2745]: I1105 14:57:31.895448 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 14:57:31.896484 kubelet[2745]: E1105 14:57:31.895668 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:31.896484 kubelet[2745]: I1105 14:57:31.896225 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 14:57:31.902506 kubelet[2745]: E1105 14:57:31.902468 2745 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 5 14:57:31.905688 kubelet[2745]: E1105 14:57:31.903207 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:31.905688 kubelet[2745]: E1105 14:57:31.903295 2745 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 5 14:57:31.905688 kubelet[2745]: E1105 14:57:31.903418 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:31.931028 kubelet[2745]: I1105 14:57:31.930864 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9308418619999999 podStartE2EDuration="1.930841862s" podCreationTimestamp="2025-11-05 14:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:57:31.930213985 +0000 UTC m=+1.140768752" watchObservedRunningTime="2025-11-05 14:57:31.930841862 +0000 UTC m=+1.141396629"
Nov 5 14:57:31.931028 kubelet[2745]: I1105 14:57:31.930980 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.930974572 podStartE2EDuration="1.930974572s" podCreationTimestamp="2025-11-05 14:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:57:31.91786084 +0000 UTC m=+1.128415607" watchObservedRunningTime="2025-11-05 14:57:31.930974572 +0000 UTC m=+1.141529339"
Nov 5 14:57:31.962132 kubelet[2745]: I1105 14:57:31.961943 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9619262100000001 podStartE2EDuration="1.96192621s" podCreationTimestamp="2025-11-05 14:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:57:31.94034802 +0000 UTC m=+1.150902787" watchObservedRunningTime="2025-11-05 14:57:31.96192621 +0000 UTC m=+1.172480977"
Nov 5 14:57:32.896382 kubelet[2745]: E1105 14:57:32.896352 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:32.896961 kubelet[2745]: E1105 14:57:32.896443 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:33.898091 kubelet[2745]: E1105 14:57:33.898033 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:33.899209 kubelet[2745]: E1105 14:57:33.899181 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:34.554681 kubelet[2745]: E1105 14:57:34.554644 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:37.071032 kubelet[2745]: I1105 14:57:37.071004 2745 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 14:57:37.071426 containerd[1592]: time="2025-11-05T14:57:37.071289812Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 14:57:37.071729 kubelet[2745]: I1105 14:57:37.071444 2745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 5 14:57:37.682887 systemd[1]: Created slice kubepods-besteffort-podd275ed27_7920_46f8_9d18_93f822aa7731.slice - libcontainer container kubepods-besteffort-podd275ed27_7920_46f8_9d18_93f822aa7731.slice.
Nov 5 14:57:37.700819 kubelet[2745]: I1105 14:57:37.700710 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4kj5\" (UniqueName: \"kubernetes.io/projected/d275ed27-7920-46f8-9d18-93f822aa7731-kube-api-access-p4kj5\") pod \"kube-proxy-c29z8\" (UID: \"d275ed27-7920-46f8-9d18-93f822aa7731\") " pod="kube-system/kube-proxy-c29z8"
Nov 5 14:57:37.700819 kubelet[2745]: I1105 14:57:37.700761 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d275ed27-7920-46f8-9d18-93f822aa7731-kube-proxy\") pod \"kube-proxy-c29z8\" (UID: \"d275ed27-7920-46f8-9d18-93f822aa7731\") " pod="kube-system/kube-proxy-c29z8"
Nov 5 14:57:37.700819 kubelet[2745]: I1105 14:57:37.700780 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d275ed27-7920-46f8-9d18-93f822aa7731-lib-modules\") pod \"kube-proxy-c29z8\" (UID: \"d275ed27-7920-46f8-9d18-93f822aa7731\") " pod="kube-system/kube-proxy-c29z8"
Nov 5 14:57:37.700819 kubelet[2745]: I1105 14:57:37.700799 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d275ed27-7920-46f8-9d18-93f822aa7731-xtables-lock\") pod \"kube-proxy-c29z8\" (UID: \"d275ed27-7920-46f8-9d18-93f822aa7731\") " pod="kube-system/kube-proxy-c29z8"
Nov 5 14:57:37.994952 kubelet[2745]: E1105 14:57:37.994913 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:37.995479 containerd[1592]: time="2025-11-05T14:57:37.995410392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c29z8,Uid:d275ed27-7920-46f8-9d18-93f822aa7731,Namespace:kube-system,Attempt:0,}"
Nov 5 14:57:38.017682 containerd[1592]: time="2025-11-05T14:57:38.017633630Z" level=info msg="connecting to shim 08d92021d84e96020eb50f5c058f58e41bbd8758e9b3d4848b357d7d93887d78" address="unix:///run/containerd/s/61ad9aef57426b16e8e34b5fd086f5f74f0146220cbd062e07807afca247d81a" namespace=k8s.io protocol=ttrpc version=3
Nov 5 14:57:38.041007 systemd[1]: Started cri-containerd-08d92021d84e96020eb50f5c058f58e41bbd8758e9b3d4848b357d7d93887d78.scope - libcontainer container 08d92021d84e96020eb50f5c058f58e41bbd8758e9b3d4848b357d7d93887d78.
Nov 5 14:57:38.077160 containerd[1592]: time="2025-11-05T14:57:38.077120594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c29z8,Uid:d275ed27-7920-46f8-9d18-93f822aa7731,Namespace:kube-system,Attempt:0,} returns sandbox id \"08d92021d84e96020eb50f5c058f58e41bbd8758e9b3d4848b357d7d93887d78\""
Nov 5 14:57:38.079319 kubelet[2745]: E1105 14:57:38.079288 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:38.086728 containerd[1592]: time="2025-11-05T14:57:38.086665787Z" level=info msg="CreateContainer within sandbox \"08d92021d84e96020eb50f5c058f58e41bbd8758e9b3d4848b357d7d93887d78\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 5 14:57:38.102590 containerd[1592]: time="2025-11-05T14:57:38.101492221Z" level=info msg="Container a1cc5091961c56bd6f2f9ca5515df8b8c3ed973e7925b0aa1a5c1b878748c0d4: CDI devices from CRI Config.CDIDevices: []"
Nov 5 14:57:38.111556 containerd[1592]: time="2025-11-05T14:57:38.111489285Z" level=info msg="CreateContainer within sandbox \"08d92021d84e96020eb50f5c058f58e41bbd8758e9b3d4848b357d7d93887d78\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a1cc5091961c56bd6f2f9ca5515df8b8c3ed973e7925b0aa1a5c1b878748c0d4\""
Nov 5 14:57:38.112277 containerd[1592]: time="2025-11-05T14:57:38.112225325Z" level=info msg="StartContainer for \"a1cc5091961c56bd6f2f9ca5515df8b8c3ed973e7925b0aa1a5c1b878748c0d4\""
Nov 5 14:57:38.114376 containerd[1592]: time="2025-11-05T14:57:38.113740112Z" level=info msg="connecting to shim a1cc5091961c56bd6f2f9ca5515df8b8c3ed973e7925b0aa1a5c1b878748c0d4" address="unix:///run/containerd/s/61ad9aef57426b16e8e34b5fd086f5f74f0146220cbd062e07807afca247d81a" protocol=ttrpc version=3
Nov 5 14:57:38.131746 systemd[1]: Started cri-containerd-a1cc5091961c56bd6f2f9ca5515df8b8c3ed973e7925b0aa1a5c1b878748c0d4.scope - libcontainer container a1cc5091961c56bd6f2f9ca5515df8b8c3ed973e7925b0aa1a5c1b878748c0d4.
Nov 5 14:57:38.167412 containerd[1592]: time="2025-11-05T14:57:38.167311295Z" level=info msg="StartContainer for \"a1cc5091961c56bd6f2f9ca5515df8b8c3ed973e7925b0aa1a5c1b878748c0d4\" returns successfully"
Nov 5 14:57:38.249125 systemd[1]: Created slice kubepods-besteffort-pod7da499ee_5b5d_4cdf_b850_81874a4732a7.slice - libcontainer container kubepods-besteffort-pod7da499ee_5b5d_4cdf_b850_81874a4732a7.slice.
Nov 5 14:57:38.305720 kubelet[2745]: I1105 14:57:38.305679 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dp77\" (UniqueName: \"kubernetes.io/projected/7da499ee-5b5d-4cdf-b850-81874a4732a7-kube-api-access-6dp77\") pod \"tigera-operator-7dcd859c48-qpvsf\" (UID: \"7da499ee-5b5d-4cdf-b850-81874a4732a7\") " pod="tigera-operator/tigera-operator-7dcd859c48-qpvsf"
Nov 5 14:57:38.305924 kubelet[2745]: I1105 14:57:38.305883 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7da499ee-5b5d-4cdf-b850-81874a4732a7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-qpvsf\" (UID: \"7da499ee-5b5d-4cdf-b850-81874a4732a7\") " pod="tigera-operator/tigera-operator-7dcd859c48-qpvsf"
Nov 5 14:57:38.553699 containerd[1592]: time="2025-11-05T14:57:38.553327613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-qpvsf,Uid:7da499ee-5b5d-4cdf-b850-81874a4732a7,Namespace:tigera-operator,Attempt:0,}"
Nov 5 14:57:38.570966 containerd[1592]: time="2025-11-05T14:57:38.570900490Z" level=info msg="connecting to shim 1626e511371d8667be809363322e4df92f3ba2ffa01577714c03d3e42eb3f1cb" address="unix:///run/containerd/s/e771dbe9cc6ab8ff8fe0c4eecbe38a05d30d44bf1cec8d5f41a1b663c73d27ab" namespace=k8s.io protocol=ttrpc version=3
Nov 5 14:57:38.593762 systemd[1]: Started cri-containerd-1626e511371d8667be809363322e4df92f3ba2ffa01577714c03d3e42eb3f1cb.scope - libcontainer container 1626e511371d8667be809363322e4df92f3ba2ffa01577714c03d3e42eb3f1cb.
Nov 5 14:57:38.623894 containerd[1592]: time="2025-11-05T14:57:38.623851022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-qpvsf,Uid:7da499ee-5b5d-4cdf-b850-81874a4732a7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1626e511371d8667be809363322e4df92f3ba2ffa01577714c03d3e42eb3f1cb\""
Nov 5 14:57:38.625697 containerd[1592]: time="2025-11-05T14:57:38.625388502Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 5 14:57:38.909661 kubelet[2745]: E1105 14:57:38.909542 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:38.918769 kubelet[2745]: I1105 14:57:38.918706 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c29z8" podStartSLOduration=1.918690169 podStartE2EDuration="1.918690169s" podCreationTimestamp="2025-11-05 14:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:57:38.918474439 +0000 UTC m=+8.129029206" watchObservedRunningTime="2025-11-05 14:57:38.918690169 +0000 UTC m=+8.129244896"
Nov 5 14:57:40.088740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670078904.mount: Deactivated successfully.
Nov 5 14:57:40.465601 containerd[1592]: time="2025-11-05T14:57:40.465466597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:57:40.466078 containerd[1592]: time="2025-11-05T14:57:40.466058034Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 5 14:57:40.468014 containerd[1592]: time="2025-11-05T14:57:40.466862185Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:57:40.469398 containerd[1592]: time="2025-11-05T14:57:40.469359684Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:57:40.470113 containerd[1592]: time="2025-11-05T14:57:40.470089596Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.844664712s"
Nov 5 14:57:40.470233 containerd[1592]: time="2025-11-05T14:57:40.470203697Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 5 14:57:40.477110 containerd[1592]: time="2025-11-05T14:57:40.477073420Z" level=info msg="CreateContainer within sandbox \"1626e511371d8667be809363322e4df92f3ba2ffa01577714c03d3e42eb3f1cb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 5 14:57:40.485362 containerd[1592]: time="2025-11-05T14:57:40.485312998Z" level=info msg="Container 8b55c7afe4d4c484943d05215bb808cae26139aa2b6ef6b66662f96b98535501: CDI devices from CRI Config.CDIDevices: []"
Nov 5 14:57:40.490686 containerd[1592]: time="2025-11-05T14:57:40.490651620Z" level=info msg="CreateContainer within sandbox \"1626e511371d8667be809363322e4df92f3ba2ffa01577714c03d3e42eb3f1cb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8b55c7afe4d4c484943d05215bb808cae26139aa2b6ef6b66662f96b98535501\""
Nov 5 14:57:40.493649 containerd[1592]: time="2025-11-05T14:57:40.493573227Z" level=info msg="StartContainer for \"8b55c7afe4d4c484943d05215bb808cae26139aa2b6ef6b66662f96b98535501\""
Nov 5 14:57:40.494624 containerd[1592]: time="2025-11-05T14:57:40.494541666Z" level=info msg="connecting to shim 8b55c7afe4d4c484943d05215bb808cae26139aa2b6ef6b66662f96b98535501" address="unix:///run/containerd/s/e771dbe9cc6ab8ff8fe0c4eecbe38a05d30d44bf1cec8d5f41a1b663c73d27ab" protocol=ttrpc version=3
Nov 5 14:57:40.534737 systemd[1]: Started cri-containerd-8b55c7afe4d4c484943d05215bb808cae26139aa2b6ef6b66662f96b98535501.scope - libcontainer container 8b55c7afe4d4c484943d05215bb808cae26139aa2b6ef6b66662f96b98535501.
Nov 5 14:57:40.567135 containerd[1592]: time="2025-11-05T14:57:40.567098849Z" level=info msg="StartContainer for \"8b55c7afe4d4c484943d05215bb808cae26139aa2b6ef6b66662f96b98535501\" returns successfully"
Nov 5 14:57:41.674620 kubelet[2745]: E1105 14:57:41.674560 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:41.693141 kubelet[2745]: I1105 14:57:41.693073 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-qpvsf" podStartSLOduration=1.845080926 podStartE2EDuration="3.693058114s" podCreationTimestamp="2025-11-05 14:57:38 +0000 UTC" firstStartedPulling="2025-11-05 14:57:38.625093846 +0000 UTC m=+7.835648613" lastFinishedPulling="2025-11-05 14:57:40.473071034 +0000 UTC m=+9.683625801" observedRunningTime="2025-11-05 14:57:40.92618514 +0000 UTC m=+10.136739907" watchObservedRunningTime="2025-11-05 14:57:41.693058114 +0000 UTC m=+10.903612921"
Nov 5 14:57:41.917836 kubelet[2745]: E1105 14:57:41.917491 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:42.778535 kubelet[2745]: E1105 14:57:42.778494 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:44.562165 kubelet[2745]: E1105 14:57:44.562125 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:45.764605 sudo[1807]: pam_unix(sudo:session): session closed for user root
Nov 5 14:57:45.766594 sshd[1806]: Connection closed by 10.0.0.1 port 42788
Nov 5 14:57:45.767891 sshd-session[1803]: pam_unix(sshd:session): session closed for user core
Nov 5 14:57:45.771935 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:42788.service: Deactivated successfully.
Nov 5 14:57:45.775240 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 14:57:45.775421 systemd[1]: session-7.scope: Consumed 6.018s CPU time, 210.8M memory peak.
Nov 5 14:57:45.777464 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit.
Nov 5 14:57:45.778924 systemd-logind[1568]: Removed session 7.
Nov 5 14:57:46.859161 update_engine[1576]: I20251105 14:57:46.858640 1576 update_attempter.cc:509] Updating boot flags...
Nov 5 14:57:53.436807 systemd[1]: Created slice kubepods-besteffort-pode6a62fc5_c620_4704_8f57_123ab26922b0.slice - libcontainer container kubepods-besteffort-pode6a62fc5_c620_4704_8f57_123ab26922b0.slice.
Nov 5 14:57:53.507848 kubelet[2745]: I1105 14:57:53.507786 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a62fc5-c620-4704-8f57-123ab26922b0-tigera-ca-bundle\") pod \"calico-typha-77fdc5c9b8-hdh6m\" (UID: \"e6a62fc5-c620-4704-8f57-123ab26922b0\") " pod="calico-system/calico-typha-77fdc5c9b8-hdh6m"
Nov 5 14:57:53.507848 kubelet[2745]: I1105 14:57:53.507841 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e6a62fc5-c620-4704-8f57-123ab26922b0-typha-certs\") pod \"calico-typha-77fdc5c9b8-hdh6m\" (UID: \"e6a62fc5-c620-4704-8f57-123ab26922b0\") " pod="calico-system/calico-typha-77fdc5c9b8-hdh6m"
Nov 5 14:57:53.508207 kubelet[2745]: I1105 14:57:53.507894 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqhxs\" (UniqueName: \"kubernetes.io/projected/e6a62fc5-c620-4704-8f57-123ab26922b0-kube-api-access-kqhxs\") pod \"calico-typha-77fdc5c9b8-hdh6m\" (UID: \"e6a62fc5-c620-4704-8f57-123ab26922b0\") " pod="calico-system/calico-typha-77fdc5c9b8-hdh6m"
Nov 5 14:57:53.594048 systemd[1]: Created slice kubepods-besteffort-pode85e6a37_b029_4a33_8376_f43d3f18e1f6.slice - libcontainer container kubepods-besteffort-pode85e6a37_b029_4a33_8376_f43d3f18e1f6.slice.
Nov 5 14:57:53.608148 kubelet[2745]: I1105 14:57:53.608085 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-cni-bin-dir\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608148 kubelet[2745]: I1105 14:57:53.608145 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-cni-net-dir\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608290 kubelet[2745]: I1105 14:57:53.608162 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-cni-log-dir\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608290 kubelet[2745]: I1105 14:57:53.608178 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e85e6a37-b029-4a33-8376-f43d3f18e1f6-node-certs\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608290 kubelet[2745]: I1105 14:57:53.608193 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-policysync\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608290 kubelet[2745]: I1105 14:57:53.608207 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-lib-modules\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608290 kubelet[2745]: I1105 14:57:53.608221 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e85e6a37-b029-4a33-8376-f43d3f18e1f6-tigera-ca-bundle\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608403 kubelet[2745]: I1105 14:57:53.608236 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-var-run-calico\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608403 kubelet[2745]: I1105 14:57:53.608264 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-var-lib-calico\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608403 kubelet[2745]: I1105 14:57:53.608279 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-xtables-lock\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608403 kubelet[2745]: I1105 14:57:53.608316 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e85e6a37-b029-4a33-8376-f43d3f18e1f6-flexvol-driver-host\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.608403 kubelet[2745]: I1105 14:57:53.608334 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmbr\" (UniqueName: \"kubernetes.io/projected/e85e6a37-b029-4a33-8376-f43d3f18e1f6-kube-api-access-tjmbr\") pod \"calico-node-ztmpt\" (UID: \"e85e6a37-b029-4a33-8376-f43d3f18e1f6\") " pod="calico-system/calico-node-ztmpt"
Nov 5 14:57:53.711969 kubelet[2745]: E1105 14:57:53.711774 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:53.711969 kubelet[2745]: W1105 14:57:53.711890 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:53.711969 kubelet[2745]: E1105 14:57:53.711916 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:53.713609 kubelet[2745]: E1105 14:57:53.713566 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:53.713609 kubelet[2745]: W1105 14:57:53.713594 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:53.713609 kubelet[2745]: E1105 14:57:53.713609 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:53.728104 kubelet[2745]: E1105 14:57:53.728085 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:53.728104 kubelet[2745]: W1105 14:57:53.728100 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:53.728206 kubelet[2745]: E1105 14:57:53.728114 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 5 14:57:53.741336 kubelet[2745]: E1105 14:57:53.741253 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:53.741856 containerd[1592]: time="2025-11-05T14:57:53.741811686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77fdc5c9b8-hdh6m,Uid:e6a62fc5-c620-4704-8f57-123ab26922b0,Namespace:calico-system,Attempt:0,}" Nov 5 14:57:53.777798 containerd[1592]: time="2025-11-05T14:57:53.777752438Z" level=info msg="connecting to shim 3a1972a5e9d432121e47cf9e99699276f30c112de2628717d46cb235ac9516f6" address="unix:///run/containerd/s/97b6135eca6b34ea42acdd49f847fe49f1f534d9669f9511788bcba0892c2258" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:57:53.797176 kubelet[2745]: E1105 14:57:53.797115 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196" Nov 5 14:57:53.797752 kubelet[2745]: E1105 14:57:53.797729 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.798715 kubelet[2745]: W1105 14:57:53.797747 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.798715 kubelet[2745]: E1105 14:57:53.797781 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.798715 kubelet[2745]: E1105 14:57:53.797959 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.803170 kubelet[2745]: W1105 14:57:53.797969 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.803170 kubelet[2745]: E1105 14:57:53.803166 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.803744 kubelet[2745]: E1105 14:57:53.803546 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.803744 kubelet[2745]: W1105 14:57:53.803556 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.803744 kubelet[2745]: E1105 14:57:53.803565 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.803744 kubelet[2745]: E1105 14:57:53.803723 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.803744 kubelet[2745]: W1105 14:57:53.803732 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.803744 kubelet[2745]: E1105 14:57:53.803748 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.805622 kubelet[2745]: E1105 14:57:53.804651 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.805622 kubelet[2745]: W1105 14:57:53.804668 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.805622 kubelet[2745]: E1105 14:57:53.804680 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.805622 kubelet[2745]: E1105 14:57:53.804853 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.805622 kubelet[2745]: W1105 14:57:53.804860 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.805622 kubelet[2745]: E1105 14:57:53.804867 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.805622 kubelet[2745]: E1105 14:57:53.805568 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.805622 kubelet[2745]: W1105 14:57:53.805586 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.805622 kubelet[2745]: E1105 14:57:53.805596 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.805846 kubelet[2745]: E1105 14:57:53.805736 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.805846 kubelet[2745]: W1105 14:57:53.805743 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.805846 kubelet[2745]: E1105 14:57:53.805751 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.806604 kubelet[2745]: E1105 14:57:53.806164 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.806604 kubelet[2745]: W1105 14:57:53.806176 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.806604 kubelet[2745]: E1105 14:57:53.806185 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.806604 kubelet[2745]: E1105 14:57:53.806403 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.806604 kubelet[2745]: W1105 14:57:53.806412 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.806604 kubelet[2745]: E1105 14:57:53.806421 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.806604 kubelet[2745]: E1105 14:57:53.806555 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.806604 kubelet[2745]: W1105 14:57:53.806562 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.806604 kubelet[2745]: E1105 14:57:53.806569 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.807010 systemd[1]: Started cri-containerd-3a1972a5e9d432121e47cf9e99699276f30c112de2628717d46cb235ac9516f6.scope - libcontainer container 3a1972a5e9d432121e47cf9e99699276f30c112de2628717d46cb235ac9516f6. 
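The repeated `driver-call.go` errors above all have the same shape: kubelet probes each directory under the FlexVolume plugin dir by executing `<driver> init` and parsing stdout as JSON; here the `uds` executable is absent, stdout is `""`, and decoding an empty string yields Go's "unexpected end of JSON input". A minimal sketch of that call/parse cycle (a hypothetical Python stand-in, not kubelet's actual Go code — the driver script and `driver_call` helper are illustrative assumptions):

```python
import json
import os
import stat
import subprocess
import tempfile

def driver_call(executable: str, *args: str) -> dict:
    # Hypothetical analogue of kubelet's FlexVolume driver call:
    # run `<driver> <cmd>` and parse its stdout as JSON. Empty stdout
    # (e.g. a missing executable) is what surfaces in the logs above
    # as "unexpected end of JSON input".
    out = subprocess.run([executable, *args],
                         capture_output=True, text=True).stdout
    return json.loads(out)

# A minimal FlexVolume-style driver that answers `init` with valid JSON,
# per the FlexVolume call-out convention ({"status": ..., "capabilities": ...}).
DRIVER = """#!/bin/sh
if [ "$1" = "init" ]; then
  echo '{"status": "Success", "capabilities": {"attach": false}}'
else
  echo '{"status": "Not supported"}'
fi
"""

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "uds")
    with open(path, "w") as f:
        f.write(DRIVER)
    os.chmod(path, stat.S_IRWXU)  # make the driver executable

    resp = driver_call(path, "init")
    print(resp["status"])  # Success
```

With the driver present and emitting JSON, the probe succeeds; with it absent (as on this node, where `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` is not found), every periodic plugin probe re-logs the same failure triplet seen throughout this section.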
Nov 5 14:57:53.808673 kubelet[2745]: E1105 14:57:53.807663 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.808673 kubelet[2745]: W1105 14:57:53.807680 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.808673 kubelet[2745]: E1105 14:57:53.807692 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.808673 kubelet[2745]: E1105 14:57:53.807929 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.808673 kubelet[2745]: W1105 14:57:53.807961 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.808673 kubelet[2745]: E1105 14:57:53.807973 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.808673 kubelet[2745]: E1105 14:57:53.808127 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.808673 kubelet[2745]: W1105 14:57:53.808135 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.808673 kubelet[2745]: E1105 14:57:53.808145 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.808673 kubelet[2745]: E1105 14:57:53.808681 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.808912 kubelet[2745]: W1105 14:57:53.808690 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.808912 kubelet[2745]: E1105 14:57:53.808699 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.808912 kubelet[2745]: E1105 14:57:53.808847 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.808912 kubelet[2745]: W1105 14:57:53.808854 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.808912 kubelet[2745]: E1105 14:57:53.808862 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.809105 kubelet[2745]: E1105 14:57:53.809036 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.809105 kubelet[2745]: W1105 14:57:53.809047 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.809105 kubelet[2745]: E1105 14:57:53.809055 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.809615 kubelet[2745]: E1105 14:57:53.809180 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.809615 kubelet[2745]: W1105 14:57:53.809191 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.809615 kubelet[2745]: E1105 14:57:53.809201 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.809615 kubelet[2745]: E1105 14:57:53.809325 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.809615 kubelet[2745]: W1105 14:57:53.809332 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.809615 kubelet[2745]: E1105 14:57:53.809339 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.809615 kubelet[2745]: E1105 14:57:53.809463 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.809615 kubelet[2745]: W1105 14:57:53.809471 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.809615 kubelet[2745]: E1105 14:57:53.809478 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.810958 kubelet[2745]: E1105 14:57:53.810933 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.810958 kubelet[2745]: W1105 14:57:53.810949 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.810958 kubelet[2745]: E1105 14:57:53.810963 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.811087 kubelet[2745]: I1105 14:57:53.810998 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/72dac233-4b2b-4265-b846-29435de8b196-socket-dir\") pod \"csi-node-driver-tm97f\" (UID: \"72dac233-4b2b-4265-b846-29435de8b196\") " pod="calico-system/csi-node-driver-tm97f" Nov 5 14:57:53.811424 kubelet[2745]: E1105 14:57:53.811172 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.811424 kubelet[2745]: W1105 14:57:53.811185 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.811424 kubelet[2745]: E1105 14:57:53.811292 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.811424 kubelet[2745]: I1105 14:57:53.811333 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/72dac233-4b2b-4265-b846-29435de8b196-varrun\") pod \"csi-node-driver-tm97f\" (UID: \"72dac233-4b2b-4265-b846-29435de8b196\") " pod="calico-system/csi-node-driver-tm97f" Nov 5 14:57:53.812057 kubelet[2745]: E1105 14:57:53.811686 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.812057 kubelet[2745]: W1105 14:57:53.811698 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.812057 kubelet[2745]: E1105 14:57:53.811709 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.812057 kubelet[2745]: I1105 14:57:53.811732 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6w6m\" (UniqueName: \"kubernetes.io/projected/72dac233-4b2b-4265-b846-29435de8b196-kube-api-access-n6w6m\") pod \"csi-node-driver-tm97f\" (UID: \"72dac233-4b2b-4265-b846-29435de8b196\") " pod="calico-system/csi-node-driver-tm97f" Nov 5 14:57:53.813496 kubelet[2745]: E1105 14:57:53.813268 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.814608 kubelet[2745]: W1105 14:57:53.813567 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.814716 kubelet[2745]: E1105 14:57:53.814682 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.815073 kubelet[2745]: E1105 14:57:53.815057 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.815632 kubelet[2745]: W1105 14:57:53.815139 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.815632 kubelet[2745]: E1105 14:57:53.815156 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.816695 kubelet[2745]: E1105 14:57:53.816675 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.816852 kubelet[2745]: W1105 14:57:53.816774 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.817547 kubelet[2745]: E1105 14:57:53.817518 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.819021 kubelet[2745]: E1105 14:57:53.818456 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.819144 kubelet[2745]: W1105 14:57:53.819125 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.819223 kubelet[2745]: E1105 14:57:53.819212 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.819735 kubelet[2745]: E1105 14:57:53.819553 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.820249 kubelet[2745]: W1105 14:57:53.820215 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.820249 kubelet[2745]: E1105 14:57:53.820278 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.821231 kubelet[2745]: I1105 14:57:53.821211 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72dac233-4b2b-4265-b846-29435de8b196-kubelet-dir\") pod \"csi-node-driver-tm97f\" (UID: \"72dac233-4b2b-4265-b846-29435de8b196\") " pod="calico-system/csi-node-driver-tm97f" Nov 5 14:57:53.821901 kubelet[2745]: E1105 14:57:53.821883 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.822099 kubelet[2745]: W1105 14:57:53.821964 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.822099 kubelet[2745]: E1105 14:57:53.821982 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.822445 kubelet[2745]: E1105 14:57:53.822421 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.822508 kubelet[2745]: W1105 14:57:53.822495 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.822557 kubelet[2745]: E1105 14:57:53.822548 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.822794 kubelet[2745]: E1105 14:57:53.822782 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.823527 kubelet[2745]: W1105 14:57:53.822879 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.823527 kubelet[2745]: E1105 14:57:53.823494 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.824601 kubelet[2745]: E1105 14:57:53.823985 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.824601 kubelet[2745]: W1105 14:57:53.824010 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.824601 kubelet[2745]: E1105 14:57:53.824027 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.825197 kubelet[2745]: E1105 14:57:53.825168 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.825483 kubelet[2745]: W1105 14:57:53.825298 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.825483 kubelet[2745]: E1105 14:57:53.825318 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.825879 kubelet[2745]: I1105 14:57:53.825732 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/72dac233-4b2b-4265-b846-29435de8b196-registration-dir\") pod \"csi-node-driver-tm97f\" (UID: \"72dac233-4b2b-4265-b846-29435de8b196\") " pod="calico-system/csi-node-driver-tm97f" Nov 5 14:57:53.826072 kubelet[2745]: E1105 14:57:53.826057 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.826276 kubelet[2745]: W1105 14:57:53.826149 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.826276 kubelet[2745]: E1105 14:57:53.826168 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.826414 kubelet[2745]: E1105 14:57:53.826401 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.826484 kubelet[2745]: W1105 14:57:53.826473 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.826707 kubelet[2745]: E1105 14:57:53.826657 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.856339 containerd[1592]: time="2025-11-05T14:57:53.856302802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77fdc5c9b8-hdh6m,Uid:e6a62fc5-c620-4704-8f57-123ab26922b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a1972a5e9d432121e47cf9e99699276f30c112de2628717d46cb235ac9516f6\"" Nov 5 14:57:53.857048 kubelet[2745]: E1105 14:57:53.857019 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:53.858323 containerd[1592]: time="2025-11-05T14:57:53.858293598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 14:57:53.896651 kubelet[2745]: E1105 14:57:53.896607 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:53.897280 containerd[1592]: time="2025-11-05T14:57:53.897238748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ztmpt,Uid:e85e6a37-b029-4a33-8376-f43d3f18e1f6,Namespace:calico-system,Attempt:0,}" Nov 5 14:57:53.927285 kubelet[2745]: E1105 14:57:53.927249 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.927285 kubelet[2745]: W1105 14:57:53.927275 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.927285 kubelet[2745]: E1105 14:57:53.927296 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.928019 kubelet[2745]: E1105 14:57:53.927494 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928019 kubelet[2745]: W1105 14:57:53.927502 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928019 kubelet[2745]: E1105 14:57:53.927511 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.928019 kubelet[2745]: E1105 14:57:53.927689 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928019 kubelet[2745]: W1105 14:57:53.927697 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928019 kubelet[2745]: E1105 14:57:53.927705 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.928019 kubelet[2745]: E1105 14:57:53.927862 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928019 kubelet[2745]: W1105 14:57:53.927870 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928019 kubelet[2745]: E1105 14:57:53.927892 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.928200 kubelet[2745]: E1105 14:57:53.928058 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928200 kubelet[2745]: W1105 14:57:53.928066 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928200 kubelet[2745]: E1105 14:57:53.928073 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.928281 kubelet[2745]: E1105 14:57:53.928231 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928281 kubelet[2745]: W1105 14:57:53.928248 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928281 kubelet[2745]: E1105 14:57:53.928260 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.928471 kubelet[2745]: E1105 14:57:53.928400 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928471 kubelet[2745]: W1105 14:57:53.928410 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928471 kubelet[2745]: E1105 14:57:53.928427 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.928654 kubelet[2745]: E1105 14:57:53.928548 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928654 kubelet[2745]: W1105 14:57:53.928556 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928654 kubelet[2745]: E1105 14:57:53.928563 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.928746 kubelet[2745]: E1105 14:57:53.928730 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928746 kubelet[2745]: W1105 14:57:53.928739 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928746 kubelet[2745]: E1105 14:57:53.928746 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.928982 kubelet[2745]: E1105 14:57:53.928893 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.928982 kubelet[2745]: W1105 14:57:53.928901 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.928982 kubelet[2745]: E1105 14:57:53.928909 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.929132 kubelet[2745]: E1105 14:57:53.929116 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.929132 kubelet[2745]: W1105 14:57:53.929132 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.929210 kubelet[2745]: E1105 14:57:53.929141 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.929293 kubelet[2745]: E1105 14:57:53.929278 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.929293 kubelet[2745]: W1105 14:57:53.929290 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.929348 kubelet[2745]: E1105 14:57:53.929298 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.929525 kubelet[2745]: E1105 14:57:53.929508 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.929525 kubelet[2745]: W1105 14:57:53.929519 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.929601 kubelet[2745]: E1105 14:57:53.929537 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.929721 kubelet[2745]: E1105 14:57:53.929704 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.929721 kubelet[2745]: W1105 14:57:53.929715 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.929769 kubelet[2745]: E1105 14:57:53.929727 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.929858 kubelet[2745]: E1105 14:57:53.929844 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.929858 kubelet[2745]: W1105 14:57:53.929854 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.929901 kubelet[2745]: E1105 14:57:53.929862 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.930078 kubelet[2745]: E1105 14:57:53.930062 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.930078 kubelet[2745]: W1105 14:57:53.930074 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.930130 kubelet[2745]: E1105 14:57:53.930084 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.930319 kubelet[2745]: E1105 14:57:53.930258 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.930319 kubelet[2745]: W1105 14:57:53.930269 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.930319 kubelet[2745]: E1105 14:57:53.930277 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.930407 kubelet[2745]: E1105 14:57:53.930401 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.930629 kubelet[2745]: W1105 14:57:53.930409 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.930629 kubelet[2745]: E1105 14:57:53.930416 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.930629 kubelet[2745]: E1105 14:57:53.930535 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.930629 kubelet[2745]: W1105 14:57:53.930542 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.930629 kubelet[2745]: E1105 14:57:53.930549 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.931315 kubelet[2745]: E1105 14:57:53.930706 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.931315 kubelet[2745]: W1105 14:57:53.930714 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.931315 kubelet[2745]: E1105 14:57:53.930721 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.931315 kubelet[2745]: E1105 14:57:53.930932 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.931315 kubelet[2745]: W1105 14:57:53.930946 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.931315 kubelet[2745]: E1105 14:57:53.930960 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.931315 kubelet[2745]: E1105 14:57:53.931153 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.931315 kubelet[2745]: W1105 14:57:53.931161 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.931315 kubelet[2745]: E1105 14:57:53.931170 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.931671 kubelet[2745]: E1105 14:57:53.931653 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.931671 kubelet[2745]: W1105 14:57:53.931668 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.931724 kubelet[2745]: E1105 14:57:53.931681 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.931899 kubelet[2745]: E1105 14:57:53.931880 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.931899 kubelet[2745]: W1105 14:57:53.931892 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.931899 kubelet[2745]: E1105 14:57:53.931900 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.932164 kubelet[2745]: E1105 14:57:53.932147 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.932164 kubelet[2745]: W1105 14:57:53.932161 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.932253 kubelet[2745]: E1105 14:57:53.932172 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:53.942135 containerd[1592]: time="2025-11-05T14:57:53.942093027Z" level=info msg="connecting to shim c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c" address="unix:///run/containerd/s/4be17a40e64d84a187af0479cdfe9b60354e1daee32f196b035636d5a0d59417" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:57:53.944587 kubelet[2745]: E1105 14:57:53.944540 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:53.944587 kubelet[2745]: W1105 14:57:53.944556 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:53.944587 kubelet[2745]: E1105 14:57:53.944570 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:53.975826 systemd[1]: Started cri-containerd-c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c.scope - libcontainer container c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c. Nov 5 14:57:54.000693 containerd[1592]: time="2025-11-05T14:57:54.000651572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ztmpt,Uid:e85e6a37-b029-4a33-8376-f43d3f18e1f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c\"" Nov 5 14:57:54.002394 kubelet[2745]: E1105 14:57:54.002342 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:55.098140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182305186.mount: Deactivated successfully. 
Nov 5 14:57:55.561724 containerd[1592]: time="2025-11-05T14:57:55.561678962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:55.562876 containerd[1592]: time="2025-11-05T14:57:55.562719188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 5 14:57:55.563961 containerd[1592]: time="2025-11-05T14:57:55.563932858Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:55.566148 containerd[1592]: time="2025-11-05T14:57:55.566100212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:57:55.566881 containerd[1592]: time="2025-11-05T14:57:55.566849323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.708521036s" Nov 5 14:57:55.566919 containerd[1592]: time="2025-11-05T14:57:55.566886933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 5 14:57:55.568186 containerd[1592]: time="2025-11-05T14:57:55.568163539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 14:57:55.588958 containerd[1592]: time="2025-11-05T14:57:55.588914199Z" level=info msg="CreateContainer within sandbox \"3a1972a5e9d432121e47cf9e99699276f30c112de2628717d46cb235ac9516f6\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 14:57:55.595648 containerd[1592]: time="2025-11-05T14:57:55.595054207Z" level=info msg="Container 6f787c2dfa0dc0ce727fefa68cb194f0c5ee3f25f9e483690cd29750c063efd4: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:57:55.602191 containerd[1592]: time="2025-11-05T14:57:55.601960691Z" level=info msg="CreateContainer within sandbox \"3a1972a5e9d432121e47cf9e99699276f30c112de2628717d46cb235ac9516f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6f787c2dfa0dc0ce727fefa68cb194f0c5ee3f25f9e483690cd29750c063efd4\"" Nov 5 14:57:55.602431 containerd[1592]: time="2025-11-05T14:57:55.602407965Z" level=info msg="StartContainer for \"6f787c2dfa0dc0ce727fefa68cb194f0c5ee3f25f9e483690cd29750c063efd4\"" Nov 5 14:57:55.603559 containerd[1592]: time="2025-11-05T14:57:55.603415583Z" level=info msg="connecting to shim 6f787c2dfa0dc0ce727fefa68cb194f0c5ee3f25f9e483690cd29750c063efd4" address="unix:///run/containerd/s/97b6135eca6b34ea42acdd49f847fe49f1f534d9669f9511788bcba0892c2258" protocol=ttrpc version=3 Nov 5 14:57:55.629772 systemd[1]: Started cri-containerd-6f787c2dfa0dc0ce727fefa68cb194f0c5ee3f25f9e483690cd29750c063efd4.scope - libcontainer container 6f787c2dfa0dc0ce727fefa68cb194f0c5ee3f25f9e483690cd29750c063efd4. 
Nov 5 14:57:55.667614 containerd[1592]: time="2025-11-05T14:57:55.667478506Z" level=info msg="StartContainer for \"6f787c2dfa0dc0ce727fefa68cb194f0c5ee3f25f9e483690cd29750c063efd4\" returns successfully" Nov 5 14:57:55.881060 kubelet[2745]: E1105 14:57:55.880914 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196" Nov 5 14:57:55.954625 kubelet[2745]: E1105 14:57:55.954377 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:57:55.964981 kubelet[2745]: I1105 14:57:55.964913 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77fdc5c9b8-hdh6m" podStartSLOduration=1.254828471 podStartE2EDuration="2.964897792s" podCreationTimestamp="2025-11-05 14:57:53 +0000 UTC" firstStartedPulling="2025-11-05 14:57:53.857877202 +0000 UTC m=+23.068431969" lastFinishedPulling="2025-11-05 14:57:55.567946523 +0000 UTC m=+24.778501290" observedRunningTime="2025-11-05 14:57:55.964379219 +0000 UTC m=+25.174933986" watchObservedRunningTime="2025-11-05 14:57:55.964897792 +0000 UTC m=+25.175452559" Nov 5 14:57:56.025681 kubelet[2745]: E1105 14:57:56.025639 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.025681 kubelet[2745]: W1105 14:57:56.025664 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.025681 kubelet[2745]: E1105 14:57:56.025684 2745 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:56.025887 kubelet[2745]: E1105 14:57:56.025867 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.025933 kubelet[2745]: W1105 14:57:56.025877 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.025960 kubelet[2745]: E1105 14:57:56.025934 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:56.026114 kubelet[2745]: E1105 14:57:56.026088 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.026114 kubelet[2745]: W1105 14:57:56.026098 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.026114 kubelet[2745]: E1105 14:57:56.026106 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:56.026260 kubelet[2745]: E1105 14:57:56.026236 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.026260 kubelet[2745]: W1105 14:57:56.026246 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.026260 kubelet[2745]: E1105 14:57:56.026254 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:56.026415 kubelet[2745]: E1105 14:57:56.026392 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.026415 kubelet[2745]: W1105 14:57:56.026404 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.026415 kubelet[2745]: E1105 14:57:56.026412 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:56.026548 kubelet[2745]: E1105 14:57:56.026538 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.026548 kubelet[2745]: W1105 14:57:56.026547 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.026603 kubelet[2745]: E1105 14:57:56.026555 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:56.026719 kubelet[2745]: E1105 14:57:56.026707 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.026719 kubelet[2745]: W1105 14:57:56.026717 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.026770 kubelet[2745]: E1105 14:57:56.026725 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:56.026865 kubelet[2745]: E1105 14:57:56.026852 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.026865 kubelet[2745]: W1105 14:57:56.026862 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.026919 kubelet[2745]: E1105 14:57:56.026870 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:56.027011 kubelet[2745]: E1105 14:57:56.027000 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.027011 kubelet[2745]: W1105 14:57:56.027010 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.027058 kubelet[2745]: E1105 14:57:56.027018 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:56.027140 kubelet[2745]: E1105 14:57:56.027130 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.027163 kubelet[2745]: W1105 14:57:56.027139 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.027163 kubelet[2745]: E1105 14:57:56.027147 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:56.027277 kubelet[2745]: E1105 14:57:56.027267 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.027303 kubelet[2745]: W1105 14:57:56.027277 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.027303 kubelet[2745]: E1105 14:57:56.027285 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 14:57:56.027411 kubelet[2745]: E1105 14:57:56.027401 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.027434 kubelet[2745]: W1105 14:57:56.027410 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.027434 kubelet[2745]: E1105 14:57:56.027417 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 14:57:56.027556 kubelet[2745]: E1105 14:57:56.027547 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 14:57:56.027594 kubelet[2745]: W1105 14:57:56.027556 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 14:57:56.027594 kubelet[2745]: E1105 14:57:56.027563 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 5 14:57:56.027723 kubelet[2745]: E1105 14:57:56.027712 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.027723 kubelet[2745]: W1105 14:57:56.027722 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.027767 kubelet[2745]: E1105 14:57:56.027739 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.027854 kubelet[2745]: E1105 14:57:56.027844 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.027882 kubelet[2745]: W1105 14:57:56.027853 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.027882 kubelet[2745]: E1105 14:57:56.027860 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.043323 kubelet[2745]: E1105 14:57:56.043293 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.043323 kubelet[2745]: W1105 14:57:56.043312 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.043408 kubelet[2745]: E1105 14:57:56.043329 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.043540 kubelet[2745]: E1105 14:57:56.043527 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.043540 kubelet[2745]: W1105 14:57:56.043537 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.043626 kubelet[2745]: E1105 14:57:56.043546 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.043764 kubelet[2745]: E1105 14:57:56.043737 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.043764 kubelet[2745]: W1105 14:57:56.043749 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.043764 kubelet[2745]: E1105 14:57:56.043760 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.043999 kubelet[2745]: E1105 14:57:56.043969 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.043999 kubelet[2745]: W1105 14:57:56.043986 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.043999 kubelet[2745]: E1105 14:57:56.043997 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.044190 kubelet[2745]: E1105 14:57:56.044168 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.044190 kubelet[2745]: W1105 14:57:56.044179 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.044190 kubelet[2745]: E1105 14:57:56.044187 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.044339 kubelet[2745]: E1105 14:57:56.044328 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.044367 kubelet[2745]: W1105 14:57:56.044345 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.044367 kubelet[2745]: E1105 14:57:56.044355 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.044526 kubelet[2745]: E1105 14:57:56.044516 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.044549 kubelet[2745]: W1105 14:57:56.044526 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.044549 kubelet[2745]: E1105 14:57:56.044534 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.045016 kubelet[2745]: E1105 14:57:56.044988 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.045016 kubelet[2745]: W1105 14:57:56.045003 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.045016 kubelet[2745]: E1105 14:57:56.045014 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.045183 kubelet[2745]: E1105 14:57:56.045171 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.045183 kubelet[2745]: W1105 14:57:56.045181 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.045227 kubelet[2745]: E1105 14:57:56.045189 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.045335 kubelet[2745]: E1105 14:57:56.045324 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.045335 kubelet[2745]: W1105 14:57:56.045334 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.045381 kubelet[2745]: E1105 14:57:56.045341 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.045469 kubelet[2745]: E1105 14:57:56.045459 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.045489 kubelet[2745]: W1105 14:57:56.045468 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.045489 kubelet[2745]: E1105 14:57:56.045475 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.045664 kubelet[2745]: E1105 14:57:56.045650 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.045664 kubelet[2745]: W1105 14:57:56.045661 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.045717 kubelet[2745]: E1105 14:57:56.045669 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.045840 kubelet[2745]: E1105 14:57:56.045827 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.045862 kubelet[2745]: W1105 14:57:56.045839 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.045862 kubelet[2745]: E1105 14:57:56.045847 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.046068 kubelet[2745]: E1105 14:57:56.046057 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.046093 kubelet[2745]: W1105 14:57:56.046068 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.046093 kubelet[2745]: E1105 14:57:56.046077 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.046246 kubelet[2745]: E1105 14:57:56.046234 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.046246 kubelet[2745]: W1105 14:57:56.046244 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.046294 kubelet[2745]: E1105 14:57:56.046252 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.046426 kubelet[2745]: E1105 14:57:56.046414 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.046426 kubelet[2745]: W1105 14:57:56.046424 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.046475 kubelet[2745]: E1105 14:57:56.046432 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.046742 kubelet[2745]: E1105 14:57:56.046729 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.046742 kubelet[2745]: W1105 14:57:56.046741 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.046792 kubelet[2745]: E1105 14:57:56.046750 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 14:57:56.046941 kubelet[2745]: E1105 14:57:56.046929 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 14:57:56.046970 kubelet[2745]: W1105 14:57:56.046940 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 14:57:56.046970 kubelet[2745]: E1105 14:57:56.046950 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 5 14:57:56.857308 containerd[1592]: time="2025-11-05T14:57:56.856874374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:57:56.857875 containerd[1592]: time="2025-11-05T14:57:56.857807603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Nov 5 14:57:56.858623 containerd[1592]: time="2025-11-05T14:57:56.858598836Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:57:56.860612 containerd[1592]: time="2025-11-05T14:57:56.860560716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:57:56.861349 containerd[1592]: time="2025-11-05T14:57:56.861310060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.293115273s"
Nov 5 14:57:56.861349 containerd[1592]: time="2025-11-05T14:57:56.861345148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Nov 5 14:57:56.865485 containerd[1592]: time="2025-11-05T14:57:56.865444071Z" level=info msg="CreateContainer within sandbox \"c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 5 14:57:56.873848 containerd[1592]: time="2025-11-05T14:57:56.872732934Z" level=info msg="Container 113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d: CDI devices from CRI Config.CDIDevices: []"
Nov 5 14:57:56.882528 containerd[1592]: time="2025-11-05T14:57:56.882472037Z" level=info msg="CreateContainer within sandbox \"c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d\""
Nov 5 14:57:56.884438 containerd[1592]: time="2025-11-05T14:57:56.884396988Z" level=info msg="StartContainer for \"113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d\""
Nov 5 14:57:56.886300 containerd[1592]: time="2025-11-05T14:57:56.886236678Z" level=info msg="connecting to shim 113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d" address="unix:///run/containerd/s/4be17a40e64d84a187af0479cdfe9b60354e1daee32f196b035636d5a0d59417" protocol=ttrpc version=3
Nov 5 14:57:56.911167 systemd[1]: Started cri-containerd-113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d.scope - libcontainer container 113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d.
Nov 5 14:57:56.948713 containerd[1592]: time="2025-11-05T14:57:56.948674795Z" level=info msg="StartContainer for \"113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d\" returns successfully"
Nov 5 14:57:56.957410 kubelet[2745]: I1105 14:57:56.957377 2745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 5 14:57:56.958264 kubelet[2745]: E1105 14:57:56.957906 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:56.958264 kubelet[2745]: E1105 14:57:56.958041 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:56.964790 systemd[1]: cri-containerd-113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d.scope: Deactivated successfully.
Nov 5 14:57:56.965086 systemd[1]: cri-containerd-113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d.scope: Consumed 28ms CPU time, 6.1M memory peak, 4.5M written to disk.
Nov 5 14:57:56.984594 containerd[1592]: time="2025-11-05T14:57:56.984520525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d\" id:\"113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d\" pid:3459 exited_at:{seconds:1762354676 nanos:965793383}"
Nov 5 14:57:56.990174 containerd[1592]: time="2025-11-05T14:57:56.990108613Z" level=info msg="received exit event container_id:\"113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d\" id:\"113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d\" pid:3459 exited_at:{seconds:1762354676 nanos:965793383}"
Nov 5 14:57:57.019948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-113d565a7d411c2c47f1b58c6dfd4be38f0bfeee2169270b8c1d3d426c15e97d-rootfs.mount: Deactivated successfully.
Nov 5 14:57:57.881630 kubelet[2745]: E1105 14:57:57.881262 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196"
Nov 5 14:57:57.961377 kubelet[2745]: E1105 14:57:57.961331 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:57.962653 containerd[1592]: time="2025-11-05T14:57:57.962619151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 5 14:57:58.550070 kubelet[2745]: I1105 14:57:58.550011 2745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 5 14:57:58.550398 kubelet[2745]: E1105 14:57:58.550368 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:58.963326 kubelet[2745]: E1105 14:57:58.963131 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:57:59.886133 kubelet[2745]: E1105 14:57:59.886075 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196"
Nov 5 14:58:00.296552 containerd[1592]: time="2025-11-05T14:58:00.296508364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:00.297472 containerd[1592]: time="2025-11-05T14:58:00.297266961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Nov 5 14:58:00.298186 containerd[1592]: time="2025-11-05T14:58:00.298150985Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:00.300429 containerd[1592]: time="2025-11-05T14:58:00.300393372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 14:58:00.301179 containerd[1592]: time="2025-11-05T14:58:00.301149249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.338490209s"
Nov 5 14:58:00.301245 containerd[1592]: time="2025-11-05T14:58:00.301180535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Nov 5 14:58:00.306343 containerd[1592]: time="2025-11-05T14:58:00.306294119Z" level=info msg="CreateContainer within sandbox \"c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 5 14:58:00.313607 containerd[1592]: time="2025-11-05T14:58:00.313267089Z" level=info msg="Container ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338: CDI devices from CRI Config.CDIDevices: []"
Nov 5 14:58:00.322388 containerd[1592]: time="2025-11-05T14:58:00.322338736Z" level=info msg="CreateContainer within sandbox \"c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338\""
Nov 5 14:58:00.322858 containerd[1592]: time="2025-11-05T14:58:00.322835919Z" level=info msg="StartContainer for \"ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338\""
Nov 5 14:58:00.324253 containerd[1592]: time="2025-11-05T14:58:00.324228289Z" level=info msg="connecting to shim ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338" address="unix:///run/containerd/s/4be17a40e64d84a187af0479cdfe9b60354e1daee32f196b035636d5a0d59417" protocol=ttrpc version=3
Nov 5 14:58:00.345762 systemd[1]: Started cri-containerd-ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338.scope - libcontainer container ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338.
Nov 5 14:58:00.450046 containerd[1592]: time="2025-11-05T14:58:00.449996206Z" level=info msg="StartContainer for \"ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338\" returns successfully"
Nov 5 14:58:00.947608 systemd[1]: cri-containerd-ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338.scope: Deactivated successfully.
Nov 5 14:58:00.948162 systemd[1]: cri-containerd-ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338.scope: Consumed 470ms CPU time, 176.3M memory peak, 1.1M read from disk, 165.9M written to disk.
Nov 5 14:58:00.948754 containerd[1592]: time="2025-11-05T14:58:00.948698888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338\" id:\"ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338\" pid:3520 exited_at:{seconds:1762354680 nanos:948412868}"
Nov 5 14:58:00.948754 containerd[1592]: time="2025-11-05T14:58:00.948734175Z" level=info msg="received exit event container_id:\"ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338\" id:\"ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338\" pid:3520 exited_at:{seconds:1762354680 nanos:948412868}"
Nov 5 14:58:00.966781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea56d6508111b38e9ea105497d61c136ee84621a8013881ee7d2621f47b1b338-rootfs.mount: Deactivated successfully.
Nov 5 14:58:00.976374 kubelet[2745]: E1105 14:58:00.976330 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:58:01.026865 kubelet[2745]: I1105 14:58:01.026837 2745 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 5 14:58:01.073993 systemd[1]: Created slice kubepods-burstable-pod6ecbe89f_b7e4_4c8f_aaf0_455d2c1b6092.slice - libcontainer container kubepods-burstable-pod6ecbe89f_b7e4_4c8f_aaf0_455d2c1b6092.slice.
Nov 5 14:58:01.081539 systemd[1]: Created slice kubepods-burstable-pod644366bd_9b19_452f_ae30_d5eda30abc3c.slice - libcontainer container kubepods-burstable-pod644366bd_9b19_452f_ae30_d5eda30abc3c.slice.
Nov 5 14:58:01.091410 systemd[1]: Created slice kubepods-besteffort-pod2321b872_0e6b_4e50_ac14_19211c4fd305.slice - libcontainer container kubepods-besteffort-pod2321b872_0e6b_4e50_ac14_19211c4fd305.slice.
Nov 5 14:58:01.097832 systemd[1]: Created slice kubepods-besteffort-pod33531715_49e1_4eed_bcb0_1c3ea6fda04e.slice - libcontainer container kubepods-besteffort-pod33531715_49e1_4eed_bcb0_1c3ea6fda04e.slice.
Nov 5 14:58:01.102640 systemd[1]: Created slice kubepods-besteffort-pod6bb57cd3_dd1b_489b_86e0_4fd3b7b01f3f.slice - libcontainer container kubepods-besteffort-pod6bb57cd3_dd1b_489b_86e0_4fd3b7b01f3f.slice.
Nov 5 14:58:01.108710 systemd[1]: Created slice kubepods-besteffort-pod8e2929ec_365a_4dc4_8ec5_85de67c22423.slice - libcontainer container kubepods-besteffort-pod8e2929ec_365a_4dc4_8ec5_85de67c22423.slice.
Nov 5 14:58:01.115107 systemd[1]: Created slice kubepods-besteffort-pod4c1ded08_f27a_434b_a1c1_b9344d831e1e.slice - libcontainer container kubepods-besteffort-pod4c1ded08_f27a_434b_a1c1_b9344d831e1e.slice.
Nov 5 14:58:01.120753 systemd[1]: Created slice kubepods-besteffort-pod3166bd9d_6937_48f1_bdd7_75be51da06f6.slice - libcontainer container kubepods-besteffort-pod3166bd9d_6937_48f1_bdd7_75be51da06f6.slice.
Nov 5 14:58:01.176898 kubelet[2745]: I1105 14:58:01.176808 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33531715-49e1-4eed-bcb0-1c3ea6fda04e-whisker-ca-bundle\") pod \"whisker-6bcdbc5c6-69t4z\" (UID: \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\") " pod="calico-system/whisker-6bcdbc5c6-69t4z"
Nov 5 14:58:01.176898 kubelet[2745]: I1105 14:58:01.176885 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c1ded08-f27a-434b-a1c1-b9344d831e1e-calico-apiserver-certs\") pod \"calico-apiserver-5d949b5cc6-z27s7\" (UID: \"4c1ded08-f27a-434b-a1c1-b9344d831e1e\") " pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7"
Nov 5 14:58:01.177073 kubelet[2745]: I1105 14:58:01.176936 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw75s\" (UniqueName: \"kubernetes.io/projected/6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092-kube-api-access-jw75s\") pod \"coredns-674b8bbfcf-tkl5d\" (UID: \"6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092\") " pod="kube-system/coredns-674b8bbfcf-tkl5d"
Nov 5 14:58:01.177073 kubelet[2745]: I1105 14:58:01.176955 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgc7q\" (UniqueName: \"kubernetes.io/projected/644366bd-9b19-452f-ae30-d5eda30abc3c-kube-api-access-xgc7q\") pod \"coredns-674b8bbfcf-bn2pw\" (UID: \"644366bd-9b19-452f-ae30-d5eda30abc3c\") " pod="kube-system/coredns-674b8bbfcf-bn2pw"
Nov 5 14:58:01.177073 kubelet[2745]: I1105 14:58:01.176972 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knc8\" (UniqueName: \"kubernetes.io/projected/3166bd9d-6937-48f1-bdd7-75be51da06f6-kube-api-access-2knc8\") pod \"calico-apiserver-5d949b5cc6-wpvtb\" (UID: \"3166bd9d-6937-48f1-bdd7-75be51da06f6\") " pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb"
Nov 5 14:58:01.177073 kubelet[2745]: I1105 14:58:01.177003 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6d6c\" (UniqueName: \"kubernetes.io/projected/33531715-49e1-4eed-bcb0-1c3ea6fda04e-kube-api-access-f6d6c\") pod \"whisker-6bcdbc5c6-69t4z\" (UID: \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\") " pod="calico-system/whisker-6bcdbc5c6-69t4z"
Nov 5 14:58:01.177073 kubelet[2745]: I1105 14:58:01.177022 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp52f\" (UniqueName: \"kubernetes.io/projected/8e2929ec-365a-4dc4-8ec5-85de67c22423-kube-api-access-lp52f\") pod \"calico-kube-controllers-6f4ff9db77-mfhq4\" (UID: \"8e2929ec-365a-4dc4-8ec5-85de67c22423\") " pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4"
Nov 5 14:58:01.177193 kubelet[2745]: I1105 14:58:01.177039 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2321b872-0e6b-4e50-ac14-19211c4fd305-goldmane-key-pair\") pod \"goldmane-666569f655-whbrw\" (UID: \"2321b872-0e6b-4e50-ac14-19211c4fd305\") " pod="calico-system/goldmane-666569f655-whbrw"
Nov 5 14:58:01.177193 kubelet[2745]: I1105 14:58:01.177058 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f-calico-apiserver-certs\") pod \"calico-apiserver-845b884f7d-rtl7g\" (UID: \"6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f\") " pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g"
Nov 5 14:58:01.177193 kubelet[2745]: I1105 14:58:01.177106 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/644366bd-9b19-452f-ae30-d5eda30abc3c-config-volume\") pod \"coredns-674b8bbfcf-bn2pw\" (UID: \"644366bd-9b19-452f-ae30-d5eda30abc3c\") " pod="kube-system/coredns-674b8bbfcf-bn2pw"
Nov 5 14:58:01.177193 kubelet[2745]: I1105 14:58:01.177173 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2929ec-365a-4dc4-8ec5-85de67c22423-tigera-ca-bundle\") pod \"calico-kube-controllers-6f4ff9db77-mfhq4\" (UID: \"8e2929ec-365a-4dc4-8ec5-85de67c22423\") " pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4"
Nov 5 14:58:01.177280 kubelet[2745]: I1105 14:58:01.177205 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sql46\" (UniqueName: \"kubernetes.io/projected/4c1ded08-f27a-434b-a1c1-b9344d831e1e-kube-api-access-sql46\") pod \"calico-apiserver-5d949b5cc6-z27s7\" (UID: \"4c1ded08-f27a-434b-a1c1-b9344d831e1e\") " pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7"
Nov 5 14:58:01.177305 kubelet[2745]: I1105 14:58:01.177275 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2321b872-0e6b-4e50-ac14-19211c4fd305-goldmane-ca-bundle\") pod \"goldmane-666569f655-whbrw\" (UID: \"2321b872-0e6b-4e50-ac14-19211c4fd305\") " pod="calico-system/goldmane-666569f655-whbrw"
Nov 5 14:58:01.177330 kubelet[2745]: I1105 14:58:01.177306 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3166bd9d-6937-48f1-bdd7-75be51da06f6-calico-apiserver-certs\") pod \"calico-apiserver-5d949b5cc6-wpvtb\" (UID: \"3166bd9d-6937-48f1-bdd7-75be51da06f6\") " pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb"
Nov 5 14:58:01.177353 kubelet[2745]: I1105 14:58:01.177333 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33531715-49e1-4eed-bcb0-1c3ea6fda04e-whisker-backend-key-pair\") pod \"whisker-6bcdbc5c6-69t4z\" (UID: \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\") " pod="calico-system/whisker-6bcdbc5c6-69t4z"
Nov 5 14:58:01.177353 kubelet[2745]: I1105 14:58:01.177350 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f765m\" (UniqueName: \"kubernetes.io/projected/2321b872-0e6b-4e50-ac14-19211c4fd305-kube-api-access-f765m\") pod \"goldmane-666569f655-whbrw\" (UID: \"2321b872-0e6b-4e50-ac14-19211c4fd305\") " pod="calico-system/goldmane-666569f655-whbrw"
Nov 5 14:58:01.177396 kubelet[2745]: I1105 14:58:01.177370 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9j2\" (UniqueName: \"kubernetes.io/projected/6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f-kube-api-access-4r9j2\") pod \"calico-apiserver-845b884f7d-rtl7g\" (UID: \"6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f\") " pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g"
Nov 5 14:58:01.177396 kubelet[2745]: I1105 14:58:01.177387 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2321b872-0e6b-4e50-ac14-19211c4fd305-config\") pod \"goldmane-666569f655-whbrw\" (UID: \"2321b872-0e6b-4e50-ac14-19211c4fd305\") " pod="calico-system/goldmane-666569f655-whbrw"
Nov 5 14:58:01.177441 kubelet[2745]: I1105 14:58:01.177406 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092-config-volume\") pod \"coredns-674b8bbfcf-tkl5d\" (UID: \"6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092\") " pod="kube-system/coredns-674b8bbfcf-tkl5d"
Nov 5 14:58:01.379225 kubelet[2745]: E1105 14:58:01.379153 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:58:01.380470 containerd[1592]: time="2025-11-05T14:58:01.380422681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tkl5d,Uid:6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092,Namespace:kube-system,Attempt:0,}"
Nov 5 14:58:01.387872 kubelet[2745]: E1105 14:58:01.387841 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 14:58:01.388489 containerd[1592]: time="2025-11-05T14:58:01.388365791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bn2pw,Uid:644366bd-9b19-452f-ae30-d5eda30abc3c,Namespace:kube-system,Attempt:0,}"
Nov 5 14:58:01.397877 containerd[1592]: time="2025-11-05T14:58:01.397841488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-whbrw,Uid:2321b872-0e6b-4e50-ac14-19211c4fd305,Namespace:calico-system,Attempt:0,}"
Nov 5 14:58:01.401519 containerd[1592]: time="2025-11-05T14:58:01.401334948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bcdbc5c6-69t4z,Uid:33531715-49e1-4eed-bcb0-1c3ea6fda04e,Namespace:calico-system,Attempt:0,}"
Nov 5 14:58:01.407299 containerd[1592]: time="2025-11-05T14:58:01.407265215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b884f7d-rtl7g,Uid:6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 14:58:01.416075 containerd[1592]: time="2025-11-05T14:58:01.415916267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f4ff9db77-mfhq4,Uid:8e2929ec-365a-4dc4-8ec5-85de67c22423,Namespace:calico-system,Attempt:0,}"
Nov 5 14:58:01.419222 containerd[1592]: time="2025-11-05T14:58:01.419174799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d949b5cc6-z27s7,Uid:4c1ded08-f27a-434b-a1c1-b9344d831e1e,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 14:58:01.427278 containerd[1592]: time="2025-11-05T14:58:01.427224411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d949b5cc6-wpvtb,Uid:3166bd9d-6937-48f1-bdd7-75be51da06f6,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 14:58:01.519869 containerd[1592]: time="2025-11-05T14:58:01.519807666Z" level=error msg="Failed to destroy network for sandbox \"7d5b66ae18e12b363048dd25ace42421701629610af8ee0c86ea0d0e6b5927bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 14:58:01.522769 containerd[1592]: time="2025-11-05T14:58:01.522714888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bn2pw,Uid:644366bd-9b19-452f-ae30-d5eda30abc3c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5b66ae18e12b363048dd25ace42421701629610af8ee0c86ea0d0e6b5927bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 14:58:01.523598 containerd[1592]: time="2025-11-05T14:58:01.523434632Z" level=error msg="Failed to destroy network for sandbox \"f6d751fa6aa821d49d9e61d59b67dc0d3834a56914fbcca71b71ec628de37ca6\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.524526 containerd[1592]: time="2025-11-05T14:58:01.524476320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b884f7d-rtl7g,Uid:6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d751fa6aa821d49d9e61d59b67dc0d3834a56914fbcca71b71ec628de37ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.525340 kubelet[2745]: E1105 14:58:01.525109 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d751fa6aa821d49d9e61d59b67dc0d3834a56914fbcca71b71ec628de37ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.525527 kubelet[2745]: E1105 14:58:01.525499 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d751fa6aa821d49d9e61d59b67dc0d3834a56914fbcca71b71ec628de37ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g" Nov 5 14:58:01.525983 kubelet[2745]: E1105 14:58:01.525959 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d751fa6aa821d49d9e61d59b67dc0d3834a56914fbcca71b71ec628de37ca6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g" Nov 5 14:58:01.526291 kubelet[2745]: E1105 14:58:01.526102 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-845b884f7d-rtl7g_calico-apiserver(6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-845b884f7d-rtl7g_calico-apiserver(6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6d751fa6aa821d49d9e61d59b67dc0d3834a56914fbcca71b71ec628de37ca6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g" podUID="6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f" Nov 5 14:58:01.527050 kubelet[2745]: E1105 14:58:01.526866 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5b66ae18e12b363048dd25ace42421701629610af8ee0c86ea0d0e6b5927bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.527050 kubelet[2745]: E1105 14:58:01.526933 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5b66ae18e12b363048dd25ace42421701629610af8ee0c86ea0d0e6b5927bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bn2pw" Nov 5 14:58:01.527050 kubelet[2745]: 
E1105 14:58:01.526953 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5b66ae18e12b363048dd25ace42421701629610af8ee0c86ea0d0e6b5927bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bn2pw" Nov 5 14:58:01.527290 kubelet[2745]: E1105 14:58:01.527012 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bn2pw_kube-system(644366bd-9b19-452f-ae30-d5eda30abc3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bn2pw_kube-system(644366bd-9b19-452f-ae30-d5eda30abc3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d5b66ae18e12b363048dd25ace42421701629610af8ee0c86ea0d0e6b5927bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bn2pw" podUID="644366bd-9b19-452f-ae30-d5eda30abc3c" Nov 5 14:58:01.534512 containerd[1592]: time="2025-11-05T14:58:01.534461239Z" level=error msg="Failed to destroy network for sandbox \"190dd5f6e8897a25d05f6ee86532550b5f9f27cf6287ea52ad5d4973a6c80b43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.535140 containerd[1592]: time="2025-11-05T14:58:01.535110809Z" level=error msg="Failed to destroy network for sandbox \"dfeff6f07e2d7a268758a1f05d60f3ba4659b9236449a6aea3f23ba63c48f979\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Nov 5 14:58:01.535661 containerd[1592]: time="2025-11-05T14:58:01.535631793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-whbrw,Uid:2321b872-0e6b-4e50-ac14-19211c4fd305,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"190dd5f6e8897a25d05f6ee86532550b5f9f27cf6287ea52ad5d4973a6c80b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.536188 kubelet[2745]: E1105 14:58:01.536133 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"190dd5f6e8897a25d05f6ee86532550b5f9f27cf6287ea52ad5d4973a6c80b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.536348 kubelet[2745]: E1105 14:58:01.536328 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"190dd5f6e8897a25d05f6ee86532550b5f9f27cf6287ea52ad5d4973a6c80b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-whbrw" Nov 5 14:58:01.536936 kubelet[2745]: E1105 14:58:01.536621 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"190dd5f6e8897a25d05f6ee86532550b5f9f27cf6287ea52ad5d4973a6c80b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-whbrw" Nov 5 14:58:01.536936 kubelet[2745]: E1105 14:58:01.536694 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-whbrw_calico-system(2321b872-0e6b-4e50-ac14-19211c4fd305)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-whbrw_calico-system(2321b872-0e6b-4e50-ac14-19211c4fd305)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"190dd5f6e8897a25d05f6ee86532550b5f9f27cf6287ea52ad5d4973a6c80b43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-whbrw" podUID="2321b872-0e6b-4e50-ac14-19211c4fd305" Nov 5 14:58:01.537439 containerd[1592]: time="2025-11-05T14:58:01.537411310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d949b5cc6-wpvtb,Uid:3166bd9d-6937-48f1-bdd7-75be51da06f6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfeff6f07e2d7a268758a1f05d60f3ba4659b9236449a6aea3f23ba63c48f979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.537703 kubelet[2745]: E1105 14:58:01.537673 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfeff6f07e2d7a268758a1f05d60f3ba4659b9236449a6aea3f23ba63c48f979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.537752 kubelet[2745]: E1105 14:58:01.537714 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfeff6f07e2d7a268758a1f05d60f3ba4659b9236449a6aea3f23ba63c48f979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb" Nov 5 14:58:01.537752 kubelet[2745]: E1105 14:58:01.537733 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfeff6f07e2d7a268758a1f05d60f3ba4659b9236449a6aea3f23ba63c48f979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb" Nov 5 14:58:01.538609 kubelet[2745]: E1105 14:58:01.537774 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d949b5cc6-wpvtb_calico-apiserver(3166bd9d-6937-48f1-bdd7-75be51da06f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d949b5cc6-wpvtb_calico-apiserver(3166bd9d-6937-48f1-bdd7-75be51da06f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfeff6f07e2d7a268758a1f05d60f3ba4659b9236449a6aea3f23ba63c48f979\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb" podUID="3166bd9d-6937-48f1-bdd7-75be51da06f6" Nov 5 14:58:01.542501 containerd[1592]: time="2025-11-05T14:58:01.542464281Z" level=error msg="Failed to destroy network for sandbox \"9d36e46375b2b2ad504fe64d1ddb1275c7a18936b3dedbfe864a02afd418fe77\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.544420 containerd[1592]: time="2025-11-05T14:58:01.544381065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tkl5d,Uid:6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d36e46375b2b2ad504fe64d1ddb1275c7a18936b3dedbfe864a02afd418fe77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.545085 kubelet[2745]: E1105 14:58:01.544748 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d36e46375b2b2ad504fe64d1ddb1275c7a18936b3dedbfe864a02afd418fe77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.545484 kubelet[2745]: E1105 14:58:01.545209 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d36e46375b2b2ad504fe64d1ddb1275c7a18936b3dedbfe864a02afd418fe77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tkl5d" Nov 5 14:58:01.545484 kubelet[2745]: E1105 14:58:01.545235 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d36e46375b2b2ad504fe64d1ddb1275c7a18936b3dedbfe864a02afd418fe77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tkl5d" Nov 5 14:58:01.545484 kubelet[2745]: E1105 14:58:01.545281 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tkl5d_kube-system(6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tkl5d_kube-system(6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d36e46375b2b2ad504fe64d1ddb1275c7a18936b3dedbfe864a02afd418fe77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tkl5d" podUID="6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092" Nov 5 14:58:01.551750 containerd[1592]: time="2025-11-05T14:58:01.551706252Z" level=error msg="Failed to destroy network for sandbox \"f17e2a073c29d1182729304448d07df4ffd1c8c75fead10bf7bd309e96d0b610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.552926 containerd[1592]: time="2025-11-05T14:58:01.552884047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bcdbc5c6-69t4z,Uid:33531715-49e1-4eed-bcb0-1c3ea6fda04e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f17e2a073c29d1182729304448d07df4ffd1c8c75fead10bf7bd309e96d0b610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.553164 kubelet[2745]: E1105 14:58:01.553120 2745 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f17e2a073c29d1182729304448d07df4ffd1c8c75fead10bf7bd309e96d0b610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.553269 kubelet[2745]: E1105 14:58:01.553254 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f17e2a073c29d1182729304448d07df4ffd1c8c75fead10bf7bd309e96d0b610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bcdbc5c6-69t4z" Nov 5 14:58:01.553339 kubelet[2745]: E1105 14:58:01.553324 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f17e2a073c29d1182729304448d07df4ffd1c8c75fead10bf7bd309e96d0b610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bcdbc5c6-69t4z" Nov 5 14:58:01.553468 kubelet[2745]: E1105 14:58:01.553423 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bcdbc5c6-69t4z_calico-system(33531715-49e1-4eed-bcb0-1c3ea6fda04e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bcdbc5c6-69t4z_calico-system(33531715-49e1-4eed-bcb0-1c3ea6fda04e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f17e2a073c29d1182729304448d07df4ffd1c8c75fead10bf7bd309e96d0b610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bcdbc5c6-69t4z" podUID="33531715-49e1-4eed-bcb0-1c3ea6fda04e" Nov 5 14:58:01.555976 containerd[1592]: time="2025-11-05T14:58:01.555939459Z" level=error msg="Failed to destroy network for sandbox \"70c821d3d63633ba921d66cebe1cf0e3f8be499871e3177ba153007e24c8578e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.557097 containerd[1592]: time="2025-11-05T14:58:01.557053602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f4ff9db77-mfhq4,Uid:8e2929ec-365a-4dc4-8ec5-85de67c22423,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c821d3d63633ba921d66cebe1cf0e3f8be499871e3177ba153007e24c8578e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.557569 kubelet[2745]: E1105 14:58:01.557220 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c821d3d63633ba921d66cebe1cf0e3f8be499871e3177ba153007e24c8578e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.557569 kubelet[2745]: E1105 14:58:01.557260 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c821d3d63633ba921d66cebe1cf0e3f8be499871e3177ba153007e24c8578e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4" Nov 5 14:58:01.557569 kubelet[2745]: E1105 14:58:01.557277 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c821d3d63633ba921d66cebe1cf0e3f8be499871e3177ba153007e24c8578e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4" Nov 5 14:58:01.558011 kubelet[2745]: E1105 14:58:01.557315 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f4ff9db77-mfhq4_calico-system(8e2929ec-365a-4dc4-8ec5-85de67c22423)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f4ff9db77-mfhq4_calico-system(8e2929ec-365a-4dc4-8ec5-85de67c22423)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70c821d3d63633ba921d66cebe1cf0e3f8be499871e3177ba153007e24c8578e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4" podUID="8e2929ec-365a-4dc4-8ec5-85de67c22423" Nov 5 14:58:01.565285 containerd[1592]: time="2025-11-05T14:58:01.565245082Z" level=error msg="Failed to destroy network for sandbox \"0ec1cd0e3d9a2657f0527c6eb7cb63bffa8613cd3704bd679c867d670ef5607c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.566217 containerd[1592]: time="2025-11-05T14:58:01.566186510Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d949b5cc6-z27s7,Uid:4c1ded08-f27a-434b-a1c1-b9344d831e1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec1cd0e3d9a2657f0527c6eb7cb63bffa8613cd3704bd679c867d670ef5607c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.566622 kubelet[2745]: E1105 14:58:01.566373 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec1cd0e3d9a2657f0527c6eb7cb63bffa8613cd3704bd679c867d670ef5607c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.566622 kubelet[2745]: E1105 14:58:01.566428 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec1cd0e3d9a2657f0527c6eb7cb63bffa8613cd3704bd679c867d670ef5607c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7" Nov 5 14:58:01.566622 kubelet[2745]: E1105 14:58:01.566448 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec1cd0e3d9a2657f0527c6eb7cb63bffa8613cd3704bd679c867d670ef5607c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7" Nov 5 14:58:01.566724 kubelet[2745]: E1105 14:58:01.566488 2745 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d949b5cc6-z27s7_calico-apiserver(4c1ded08-f27a-434b-a1c1-b9344d831e1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d949b5cc6-z27s7_calico-apiserver(4c1ded08-f27a-434b-a1c1-b9344d831e1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ec1cd0e3d9a2657f0527c6eb7cb63bffa8613cd3704bd679c867d670ef5607c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7" podUID="4c1ded08-f27a-434b-a1c1-b9344d831e1e" Nov 5 14:58:01.888047 systemd[1]: Created slice kubepods-besteffort-pod72dac233_4b2b_4265_b846_29435de8b196.slice - libcontainer container kubepods-besteffort-pod72dac233_4b2b_4265_b846_29435de8b196.slice. Nov 5 14:58:01.890703 containerd[1592]: time="2025-11-05T14:58:01.890671031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tm97f,Uid:72dac233-4b2b-4265-b846-29435de8b196,Namespace:calico-system,Attempt:0,}" Nov 5 14:58:01.936080 containerd[1592]: time="2025-11-05T14:58:01.936036193Z" level=error msg="Failed to destroy network for sandbox \"5d012732b01b945e200900d9edec85880ce55c3597d5cac85d81820263226191\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.937403 containerd[1592]: time="2025-11-05T14:58:01.937292085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tm97f,Uid:72dac233-4b2b-4265-b846-29435de8b196,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5d012732b01b945e200900d9edec85880ce55c3597d5cac85d81820263226191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.937561 kubelet[2745]: E1105 14:58:01.937523 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d012732b01b945e200900d9edec85880ce55c3597d5cac85d81820263226191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 14:58:01.937623 kubelet[2745]: E1105 14:58:01.937593 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d012732b01b945e200900d9edec85880ce55c3597d5cac85d81820263226191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tm97f" Nov 5 14:58:01.937686 kubelet[2745]: E1105 14:58:01.937620 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d012732b01b945e200900d9edec85880ce55c3597d5cac85d81820263226191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tm97f" Nov 5 14:58:01.937686 kubelet[2745]: E1105 14:58:01.937670 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tm97f_calico-system(72dac233-4b2b-4265-b846-29435de8b196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-tm97f_calico-system(72dac233-4b2b-4265-b846-29435de8b196)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d012732b01b945e200900d9edec85880ce55c3597d5cac85d81820263226191\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196" Nov 5 14:58:01.980684 kubelet[2745]: E1105 14:58:01.980639 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:01.984364 containerd[1592]: time="2025-11-05T14:58:01.984329261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 14:58:02.314864 systemd[1]: run-netns-cni\x2d5eb80db1\x2d69b9\x2df920\x2d7ec9\x2d1e73147fbf27.mount: Deactivated successfully. Nov 5 14:58:02.314964 systemd[1]: run-netns-cni\x2d38ca4e47\x2deca9\x2dcab1\x2dcdd9\x2dff055e297e02.mount: Deactivated successfully. Nov 5 14:58:02.315013 systemd[1]: run-netns-cni\x2d184f1170\x2dc3be\x2da30d\x2d4ea2\x2d7eded70414a5.mount: Deactivated successfully. Nov 5 14:58:02.315062 systemd[1]: run-netns-cni\x2d8523ef17\x2d8ab9\x2dac1e\x2d57da\x2ddb7f8eb807ad.mount: Deactivated successfully. Nov 5 14:58:06.124501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211228766.mount: Deactivated successfully. 
Nov 5 14:58:06.514510 containerd[1592]: time="2025-11-05T14:58:06.514462534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 14:58:06.518683 containerd[1592]: time="2025-11-05T14:58:06.518642756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:58:06.528088 containerd[1592]: time="2025-11-05T14:58:06.528046696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.543677066s" Nov 5 14:58:06.528088 containerd[1592]: time="2025-11-05T14:58:06.528078621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 14:58:06.530596 containerd[1592]: time="2025-11-05T14:58:06.529247618Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:58:06.530596 containerd[1592]: time="2025-11-05T14:58:06.529901967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:58:06.546896 containerd[1592]: time="2025-11-05T14:58:06.546857176Z" level=info msg="CreateContainer within sandbox \"c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 14:58:06.555305 containerd[1592]: time="2025-11-05T14:58:06.555269869Z" level=info msg="Container 
6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:58:06.565546 containerd[1592]: time="2025-11-05T14:58:06.565509029Z" level=info msg="CreateContainer within sandbox \"c8219bb77ba922696951b62f5716813bf00d8adc25c28092c6b9af15be81c31c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6\"" Nov 5 14:58:06.566777 containerd[1592]: time="2025-11-05T14:58:06.566752078Z" level=info msg="StartContainer for \"6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6\"" Nov 5 14:58:06.568403 containerd[1592]: time="2025-11-05T14:58:06.568378351Z" level=info msg="connecting to shim 6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6" address="unix:///run/containerd/s/4be17a40e64d84a187af0479cdfe9b60354e1daee32f196b035636d5a0d59417" protocol=ttrpc version=3 Nov 5 14:58:06.586782 systemd[1]: Started cri-containerd-6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6.scope - libcontainer container 6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6. Nov 5 14:58:06.625418 containerd[1592]: time="2025-11-05T14:58:06.625381967Z" level=info msg="StartContainer for \"6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6\" returns successfully" Nov 5 14:58:06.747891 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 14:58:06.748047 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 14:58:06.914265 kubelet[2745]: I1105 14:58:06.914106 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6d6c\" (UniqueName: \"kubernetes.io/projected/33531715-49e1-4eed-bcb0-1c3ea6fda04e-kube-api-access-f6d6c\") pod \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\" (UID: \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\") " Nov 5 14:58:06.914265 kubelet[2745]: I1105 14:58:06.914149 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33531715-49e1-4eed-bcb0-1c3ea6fda04e-whisker-backend-key-pair\") pod \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\" (UID: \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\") " Nov 5 14:58:06.916052 kubelet[2745]: I1105 14:58:06.914181 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33531715-49e1-4eed-bcb0-1c3ea6fda04e-whisker-ca-bundle\") pod \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\" (UID: \"33531715-49e1-4eed-bcb0-1c3ea6fda04e\") " Nov 5 14:58:06.934735 kubelet[2745]: I1105 14:58:06.934660 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33531715-49e1-4eed-bcb0-1c3ea6fda04e-kube-api-access-f6d6c" (OuterVolumeSpecName: "kube-api-access-f6d6c") pod "33531715-49e1-4eed-bcb0-1c3ea6fda04e" (UID: "33531715-49e1-4eed-bcb0-1c3ea6fda04e"). InnerVolumeSpecName "kube-api-access-f6d6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 14:58:06.935029 kubelet[2745]: I1105 14:58:06.934659 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33531715-49e1-4eed-bcb0-1c3ea6fda04e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "33531715-49e1-4eed-bcb0-1c3ea6fda04e" (UID: "33531715-49e1-4eed-bcb0-1c3ea6fda04e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 14:58:06.936735 kubelet[2745]: I1105 14:58:06.936627 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33531715-49e1-4eed-bcb0-1c3ea6fda04e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "33531715-49e1-4eed-bcb0-1c3ea6fda04e" (UID: "33531715-49e1-4eed-bcb0-1c3ea6fda04e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 14:58:06.996450 kubelet[2745]: E1105 14:58:06.995953 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:07.008334 systemd[1]: Removed slice kubepods-besteffort-pod33531715_49e1_4eed_bcb0_1c3ea6fda04e.slice - libcontainer container kubepods-besteffort-pod33531715_49e1_4eed_bcb0_1c3ea6fda04e.slice. Nov 5 14:58:07.014679 kubelet[2745]: I1105 14:58:07.014545 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ztmpt" podStartSLOduration=1.487568058 podStartE2EDuration="14.014530031s" podCreationTimestamp="2025-11-05 14:57:53 +0000 UTC" firstStartedPulling="2025-11-05 14:57:54.003752011 +0000 UTC m=+23.214306778" lastFinishedPulling="2025-11-05 14:58:06.530713984 +0000 UTC m=+35.741268751" observedRunningTime="2025-11-05 14:58:07.011557867 +0000 UTC m=+36.222112634" watchObservedRunningTime="2025-11-05 14:58:07.014530031 +0000 UTC m=+36.225084758" Nov 5 14:58:07.014974 kubelet[2745]: I1105 14:58:07.014952 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f6d6c\" (UniqueName: \"kubernetes.io/projected/33531715-49e1-4eed-bcb0-1c3ea6fda04e-kube-api-access-f6d6c\") on node \"localhost\" DevicePath \"\"" Nov 5 14:58:07.014974 kubelet[2745]: I1105 14:58:07.014974 2745 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/33531715-49e1-4eed-bcb0-1c3ea6fda04e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 14:58:07.014974 kubelet[2745]: I1105 14:58:07.014984 2745 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33531715-49e1-4eed-bcb0-1c3ea6fda04e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 14:58:07.063935 systemd[1]: Created slice kubepods-besteffort-pod58296e15_e5c3_4a81_a7d0_43e43f184c2f.slice - libcontainer container kubepods-besteffort-pod58296e15_e5c3_4a81_a7d0_43e43f184c2f.slice. Nov 5 14:58:07.116081 kubelet[2745]: I1105 14:58:07.116029 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58296e15-e5c3-4a81-a7d0-43e43f184c2f-whisker-ca-bundle\") pod \"whisker-745cc855db-vwx4d\" (UID: \"58296e15-e5c3-4a81-a7d0-43e43f184c2f\") " pod="calico-system/whisker-745cc855db-vwx4d" Nov 5 14:58:07.116081 kubelet[2745]: I1105 14:58:07.116081 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58296e15-e5c3-4a81-a7d0-43e43f184c2f-whisker-backend-key-pair\") pod \"whisker-745cc855db-vwx4d\" (UID: \"58296e15-e5c3-4a81-a7d0-43e43f184c2f\") " pod="calico-system/whisker-745cc855db-vwx4d" Nov 5 14:58:07.116236 kubelet[2745]: I1105 14:58:07.116107 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmqv\" (UniqueName: \"kubernetes.io/projected/58296e15-e5c3-4a81-a7d0-43e43f184c2f-kube-api-access-pdmqv\") pod \"whisker-745cc855db-vwx4d\" (UID: \"58296e15-e5c3-4a81-a7d0-43e43f184c2f\") " pod="calico-system/whisker-745cc855db-vwx4d" Nov 5 14:58:07.125342 systemd[1]: 
var-lib-kubelet-pods-33531715\x2d49e1\x2d4eed\x2dbcb0\x2d1c3ea6fda04e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6d6c.mount: Deactivated successfully. Nov 5 14:58:07.125429 systemd[1]: var-lib-kubelet-pods-33531715\x2d49e1\x2d4eed\x2dbcb0\x2d1c3ea6fda04e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 14:58:07.148149 containerd[1592]: time="2025-11-05T14:58:07.148112685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6\" id:\"8621f4cbf7e9789e7291a602f90dc51a3faeaf0a8c57b1d428b9d4fc85233ee7\" pid:3941 exit_status:1 exited_at:{seconds:1762354687 nanos:147817677}" Nov 5 14:58:07.367287 containerd[1592]: time="2025-11-05T14:58:07.367245178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745cc855db-vwx4d,Uid:58296e15-e5c3-4a81-a7d0-43e43f184c2f,Namespace:calico-system,Attempt:0,}" Nov 5 14:58:07.517506 systemd-networkd[1487]: cali61977f14169: Link UP Nov 5 14:58:07.517965 systemd-networkd[1487]: cali61977f14169: Gained carrier Nov 5 14:58:07.538123 containerd[1592]: 2025-11-05 14:58:07.388 [INFO][3956] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 14:58:07.538123 containerd[1592]: 2025-11-05 14:58:07.418 [INFO][3956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--745cc855db--vwx4d-eth0 whisker-745cc855db- calico-system 58296e15-e5c3-4a81-a7d0-43e43f184c2f 930 0 2025-11-05 14:58:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:745cc855db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-745cc855db-vwx4d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali61977f14169 [] [] }} 
ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Namespace="calico-system" Pod="whisker-745cc855db-vwx4d" WorkloadEndpoint="localhost-k8s-whisker--745cc855db--vwx4d-" Nov 5 14:58:07.538123 containerd[1592]: 2025-11-05 14:58:07.418 [INFO][3956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Namespace="calico-system" Pod="whisker-745cc855db-vwx4d" WorkloadEndpoint="localhost-k8s-whisker--745cc855db--vwx4d-eth0" Nov 5 14:58:07.538123 containerd[1592]: 2025-11-05 14:58:07.477 [INFO][3970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" HandleID="k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Workload="localhost-k8s-whisker--745cc855db--vwx4d-eth0" Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.477 [INFO][3970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" HandleID="k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Workload="localhost-k8s-whisker--745cc855db--vwx4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000344f20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-745cc855db-vwx4d", "timestamp":"2025-11-05 14:58:07.47733733 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.477 [INFO][3970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.477 [INFO][3970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.477 [INFO][3970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.487 [INFO][3970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" host="localhost" Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.492 [INFO][3970] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.495 [INFO][3970] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.497 [INFO][3970] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.499 [INFO][3970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:07.538541 containerd[1592]: 2025-11-05 14:58:07.499 [INFO][3970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" host="localhost" Nov 5 14:58:07.538760 containerd[1592]: 2025-11-05 14:58:07.500 [INFO][3970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c Nov 5 14:58:07.538760 containerd[1592]: 2025-11-05 14:58:07.503 [INFO][3970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" host="localhost" Nov 5 14:58:07.538760 containerd[1592]: 2025-11-05 14:58:07.508 [INFO][3970] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" host="localhost" Nov 5 14:58:07.538760 containerd[1592]: 2025-11-05 14:58:07.508 [INFO][3970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" host="localhost" Nov 5 14:58:07.538760 containerd[1592]: 2025-11-05 14:58:07.508 [INFO][3970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:58:07.538760 containerd[1592]: 2025-11-05 14:58:07.508 [INFO][3970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" HandleID="k8s-pod-network.8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Workload="localhost-k8s-whisker--745cc855db--vwx4d-eth0" Nov 5 14:58:07.538870 containerd[1592]: 2025-11-05 14:58:07.511 [INFO][3956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Namespace="calico-system" Pod="whisker-745cc855db-vwx4d" WorkloadEndpoint="localhost-k8s-whisker--745cc855db--vwx4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--745cc855db--vwx4d-eth0", GenerateName:"whisker-745cc855db-", Namespace:"calico-system", SelfLink:"", UID:"58296e15-e5c3-4a81-a7d0-43e43f184c2f", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 58, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"745cc855db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-745cc855db-vwx4d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali61977f14169", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:07.538870 containerd[1592]: 2025-11-05 14:58:07.511 [INFO][3956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Namespace="calico-system" Pod="whisker-745cc855db-vwx4d" WorkloadEndpoint="localhost-k8s-whisker--745cc855db--vwx4d-eth0" Nov 5 14:58:07.538940 containerd[1592]: 2025-11-05 14:58:07.511 [INFO][3956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61977f14169 ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Namespace="calico-system" Pod="whisker-745cc855db-vwx4d" WorkloadEndpoint="localhost-k8s-whisker--745cc855db--vwx4d-eth0" Nov 5 14:58:07.538940 containerd[1592]: 2025-11-05 14:58:07.519 [INFO][3956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Namespace="calico-system" Pod="whisker-745cc855db-vwx4d" WorkloadEndpoint="localhost-k8s-whisker--745cc855db--vwx4d-eth0" Nov 5 14:58:07.538976 containerd[1592]: 2025-11-05 14:58:07.523 [INFO][3956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Namespace="calico-system" Pod="whisker-745cc855db-vwx4d" 
WorkloadEndpoint="localhost-k8s-whisker--745cc855db--vwx4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--745cc855db--vwx4d-eth0", GenerateName:"whisker-745cc855db-", Namespace:"calico-system", SelfLink:"", UID:"58296e15-e5c3-4a81-a7d0-43e43f184c2f", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 58, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"745cc855db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c", Pod:"whisker-745cc855db-vwx4d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali61977f14169", MAC:"a6:8d:7f:9a:39:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:07.539022 containerd[1592]: 2025-11-05 14:58:07.534 [INFO][3956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" Namespace="calico-system" Pod="whisker-745cc855db-vwx4d" WorkloadEndpoint="localhost-k8s-whisker--745cc855db--vwx4d-eth0" Nov 5 14:58:07.580038 containerd[1592]: time="2025-11-05T14:58:07.579992112Z" level=info msg="connecting to shim 
8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c" address="unix:///run/containerd/s/316742355c8cc02b7e90f5e04a3ebddbc877d52e9790759ec6f73ca8b67deeb2" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:07.608736 systemd[1]: Started cri-containerd-8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c.scope - libcontainer container 8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c. Nov 5 14:58:07.618409 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:07.637298 containerd[1592]: time="2025-11-05T14:58:07.637260030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745cc855db-vwx4d,Uid:58296e15-e5c3-4a81-a7d0-43e43f184c2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d026180d4a15e9c4a019af2a8b450228a3e507e9d58acc3e99ad9dcec06cb7c\"" Nov 5 14:58:07.638875 containerd[1592]: time="2025-11-05T14:58:07.638824004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 14:58:07.924509 containerd[1592]: time="2025-11-05T14:58:07.924381385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:07.978225 containerd[1592]: time="2025-11-05T14:58:07.978156574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 14:58:07.978397 containerd[1592]: time="2025-11-05T14:58:07.978225665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 14:58:07.981237 kubelet[2745]: E1105 14:58:07.981181 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 14:58:07.982643 kubelet[2745]: E1105 14:58:07.982614 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 14:58:07.984155 kubelet[2745]: E1105 14:58:07.983697 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ae650ae248af48029d0a2949d6c21df7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdmqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fals
e,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-745cc855db-vwx4d_calico-system(58296e15-e5c3-4a81-a7d0-43e43f184c2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:07.985554 containerd[1592]: time="2025-11-05T14:58:07.985475045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 14:58:07.997838 kubelet[2745]: E1105 14:58:07.997807 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:08.197178 containerd[1592]: time="2025-11-05T14:58:08.196913278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6\" id:\"1c1e22bf7a3193559dbb8f854778dda54568b5977e94d441e5f0fc5e9b64d295\" pid:4064 exit_status:1 exited_at:{seconds:1762354688 nanos:195948886}" Nov 5 14:58:08.204919 containerd[1592]: time="2025-11-05T14:58:08.204764477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:08.228947 containerd[1592]: time="2025-11-05T14:58:08.228140604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 14:58:08.230046 containerd[1592]: time="2025-11-05T14:58:08.228195093Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 14:58:08.230127 kubelet[2745]: E1105 14:58:08.229283 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 14:58:08.230127 kubelet[2745]: E1105 14:58:08.229326 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 14:58:08.230245 kubelet[2745]: E1105 14:58:08.229438 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdmqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-745cc855db-vwx4d_calico-system(58296e15-e5c3-4a81-a7d0-43e43f184c2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:08.230764 kubelet[2745]: E1105 14:58:08.230697 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-745cc855db-vwx4d" podUID="58296e15-e5c3-4a81-a7d0-43e43f184c2f" Nov 5 14:58:08.430766 systemd-networkd[1487]: vxlan.calico: Link UP Nov 5 14:58:08.430773 systemd-networkd[1487]: vxlan.calico: Gained carrier Nov 5 14:58:08.857811 systemd-networkd[1487]: cali61977f14169: Gained IPv6LL Nov 5 14:58:08.883246 kubelet[2745]: I1105 14:58:08.883201 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33531715-49e1-4eed-bcb0-1c3ea6fda04e" path="/var/lib/kubelet/pods/33531715-49e1-4eed-bcb0-1c3ea6fda04e/volumes" Nov 5 14:58:08.999757 kubelet[2745]: E1105 14:58:08.999711 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:09.003076 kubelet[2745]: E1105 14:58:09.002910 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-745cc855db-vwx4d" podUID="58296e15-e5c3-4a81-a7d0-43e43f184c2f" Nov 5 14:58:09.079889 containerd[1592]: time="2025-11-05T14:58:09.079690930Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6\" id:\"64ae5b7ee964f5dcca939f52971c3582b923ee7ca344f24aad3ac2b3ed669b73\" pid:4275 exit_status:1 exited_at:{seconds:1762354689 nanos:79384163}" Nov 5 14:58:10.009682 systemd-networkd[1487]: vxlan.calico: Gained IPv6LL Nov 5 14:58:10.401184 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:58788.service - OpenSSH per-connection server daemon (10.0.0.1:58788). Nov 5 14:58:10.468394 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:10.469841 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:10.473609 systemd-logind[1568]: New session 8 of user core. Nov 5 14:58:10.480712 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 5 14:58:10.668393 sshd[4300]: Connection closed by 10.0.0.1 port 58788 Nov 5 14:58:10.668057 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:10.672230 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:58788.service: Deactivated successfully. Nov 5 14:58:10.674022 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 14:58:10.675253 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. Nov 5 14:58:10.676853 systemd-logind[1568]: Removed session 8. Nov 5 14:58:12.883270 containerd[1592]: time="2025-11-05T14:58:12.882135634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-whbrw,Uid:2321b872-0e6b-4e50-ac14-19211c4fd305,Namespace:calico-system,Attempt:0,}" Nov 5 14:58:12.883270 containerd[1592]: time="2025-11-05T14:58:12.882526490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d949b5cc6-wpvtb,Uid:3166bd9d-6937-48f1-bdd7-75be51da06f6,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:58:12.883270 containerd[1592]: time="2025-11-05T14:58:12.882985754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d949b5cc6-z27s7,Uid:4c1ded08-f27a-434b-a1c1-b9344d831e1e,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:58:13.041750 systemd-networkd[1487]: calia27af70c73f: Link UP Nov 5 14:58:13.042373 systemd-networkd[1487]: calia27af70c73f: Gained carrier Nov 5 14:58:13.060493 containerd[1592]: 2025-11-05 14:58:12.956 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--whbrw-eth0 goldmane-666569f655- calico-system 2321b872-0e6b-4e50-ac14-19211c4fd305 860 0 2025-11-05 14:57:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-whbrw 
eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia27af70c73f [] [] }} ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Namespace="calico-system" Pod="goldmane-666569f655-whbrw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--whbrw-" Nov 5 14:58:13.060493 containerd[1592]: 2025-11-05 14:58:12.957 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Namespace="calico-system" Pod="goldmane-666569f655-whbrw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--whbrw-eth0" Nov 5 14:58:13.060493 containerd[1592]: 2025-11-05 14:58:12.996 [INFO][4357] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" HandleID="k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Workload="localhost-k8s-goldmane--666569f655--whbrw-eth0" Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:12.996 [INFO][4357] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" HandleID="k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Workload="localhost-k8s-goldmane--666569f655--whbrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-whbrw", "timestamp":"2025-11-05 14:58:12.996033635 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:12.996 [INFO][4357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:12.996 [INFO][4357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:12.996 [INFO][4357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:13.009 [INFO][4357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" host="localhost" Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:13.015 [INFO][4357] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:13.021 [INFO][4357] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:13.023 [INFO][4357] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:13.025 [INFO][4357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:13.060699 containerd[1592]: 2025-11-05 14:58:13.025 [INFO][4357] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" host="localhost" Nov 5 14:58:13.060921 containerd[1592]: 2025-11-05 14:58:13.027 [INFO][4357] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e Nov 5 14:58:13.060921 containerd[1592]: 2025-11-05 14:58:13.030 [INFO][4357] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" host="localhost" Nov 5 14:58:13.060921 containerd[1592]: 2025-11-05 14:58:13.036 [INFO][4357] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" host="localhost" Nov 5 14:58:13.060921 containerd[1592]: 2025-11-05 14:58:13.036 [INFO][4357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" host="localhost" Nov 5 14:58:13.060921 containerd[1592]: 2025-11-05 14:58:13.036 [INFO][4357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:58:13.060921 containerd[1592]: 2025-11-05 14:58:13.036 [INFO][4357] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" HandleID="k8s-pod-network.02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Workload="localhost-k8s-goldmane--666569f655--whbrw-eth0" Nov 5 14:58:13.061046 containerd[1592]: 2025-11-05 14:58:13.038 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Namespace="calico-system" Pod="goldmane-666569f655-whbrw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--whbrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--whbrw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2321b872-0e6b-4e50-ac14-19211c4fd305", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-whbrw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia27af70c73f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:13.061046 containerd[1592]: 2025-11-05 14:58:13.038 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Namespace="calico-system" Pod="goldmane-666569f655-whbrw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--whbrw-eth0" Nov 5 14:58:13.061113 containerd[1592]: 2025-11-05 14:58:13.038 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia27af70c73f ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Namespace="calico-system" Pod="goldmane-666569f655-whbrw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--whbrw-eth0" Nov 5 14:58:13.061113 containerd[1592]: 2025-11-05 14:58:13.042 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Namespace="calico-system" Pod="goldmane-666569f655-whbrw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--whbrw-eth0" Nov 5 14:58:13.061150 containerd[1592]: 2025-11-05 14:58:13.042 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" 
Namespace="calico-system" Pod="goldmane-666569f655-whbrw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--whbrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--whbrw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2321b872-0e6b-4e50-ac14-19211c4fd305", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e", Pod:"goldmane-666569f655-whbrw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia27af70c73f", MAC:"de:df:13:5d:2e:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:13.061192 containerd[1592]: 2025-11-05 14:58:13.057 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" Namespace="calico-system" Pod="goldmane-666569f655-whbrw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--whbrw-eth0" Nov 5 14:58:13.082279 containerd[1592]: 
time="2025-11-05T14:58:13.082231605Z" level=info msg="connecting to shim 02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e" address="unix:///run/containerd/s/c8a47073d3d5d8302e3119835b5d76bf5fac91c2809da6196a09b838d51dd7a4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:13.103787 systemd[1]: Started cri-containerd-02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e.scope - libcontainer container 02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e. Nov 5 14:58:13.117014 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:13.150495 containerd[1592]: time="2025-11-05T14:58:13.150259741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-whbrw,Uid:2321b872-0e6b-4e50-ac14-19211c4fd305,Namespace:calico-system,Attempt:0,} returns sandbox id \"02b32f59c10ece0b0602cff8ff6bf03c33eea33ee143e609b3af38ab6565e74e\"" Nov 5 14:58:13.153632 systemd-networkd[1487]: cali85172cceab0: Link UP Nov 5 14:58:13.154411 systemd-networkd[1487]: cali85172cceab0: Gained carrier Nov 5 14:58:13.163124 containerd[1592]: time="2025-11-05T14:58:13.163087382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 14:58:13.170278 containerd[1592]: 2025-11-05 14:58:12.959 [INFO][4342] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0 calico-apiserver-5d949b5cc6- calico-apiserver 4c1ded08-f27a-434b-a1c1-b9344d831e1e 864 0 2025-11-05 14:57:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d949b5cc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d949b5cc6-z27s7 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali85172cceab0 [] [] }} ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-z27s7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-" Nov 5 14:58:13.170278 containerd[1592]: 2025-11-05 14:58:12.959 [INFO][4342] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-z27s7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" Nov 5 14:58:13.170278 containerd[1592]: 2025-11-05 14:58:12.999 [INFO][4365] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" HandleID="k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Workload="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.000 [INFO][4365] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" HandleID="k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Workload="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d949b5cc6-z27s7", "timestamp":"2025-11-05 14:58:12.99997151 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.000 [INFO][4365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.036 [INFO][4365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.036 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.109 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" host="localhost" Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.118 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.122 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.124 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.127 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:13.170462 containerd[1592]: 2025-11-05 14:58:13.127 [INFO][4365] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" host="localhost" Nov 5 14:58:13.171260 containerd[1592]: 2025-11-05 14:58:13.129 [INFO][4365] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b Nov 5 14:58:13.171260 containerd[1592]: 2025-11-05 14:58:13.135 [INFO][4365] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" host="localhost" Nov 5 14:58:13.171260 containerd[1592]: 2025-11-05 14:58:13.141 [INFO][4365] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" host="localhost" Nov 5 14:58:13.171260 containerd[1592]: 2025-11-05 14:58:13.142 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" host="localhost" Nov 5 14:58:13.171260 containerd[1592]: 2025-11-05 14:58:13.142 [INFO][4365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:58:13.171260 containerd[1592]: 2025-11-05 14:58:13.142 [INFO][4365] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" HandleID="k8s-pod-network.b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Workload="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" Nov 5 14:58:13.171369 containerd[1592]: 2025-11-05 14:58:13.147 [INFO][4342] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-z27s7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0", GenerateName:"calico-apiserver-5d949b5cc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c1ded08-f27a-434b-a1c1-b9344d831e1e", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d949b5cc6", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d949b5cc6-z27s7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85172cceab0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:13.171426 containerd[1592]: 2025-11-05 14:58:13.147 [INFO][4342] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-z27s7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" Nov 5 14:58:13.171426 containerd[1592]: 2025-11-05 14:58:13.147 [INFO][4342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85172cceab0 ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-z27s7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" Nov 5 14:58:13.171426 containerd[1592]: 2025-11-05 14:58:13.154 [INFO][4342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-z27s7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" Nov 5 14:58:13.171483 containerd[1592]: 2025-11-05 
14:58:13.155 [INFO][4342] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-z27s7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0", GenerateName:"calico-apiserver-5d949b5cc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c1ded08-f27a-434b-a1c1-b9344d831e1e", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d949b5cc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b", Pod:"calico-apiserver-5d949b5cc6-z27s7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85172cceab0", MAC:"ea:77:a5:07:f1:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:13.171843 containerd[1592]: 2025-11-05 14:58:13.166 [INFO][4342] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-z27s7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--z27s7-eth0" Nov 5 14:58:13.189992 containerd[1592]: time="2025-11-05T14:58:13.189948988Z" level=info msg="connecting to shim b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b" address="unix:///run/containerd/s/716c7a80a13abecdfa97c10104a6ed683aa2fce7ed6b1716a07150cc5aa53b3b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:13.213752 systemd[1]: Started cri-containerd-b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b.scope - libcontainer container b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b. Nov 5 14:58:13.242993 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:13.256796 systemd-networkd[1487]: calif6d94250986: Link UP Nov 5 14:58:13.257493 systemd-networkd[1487]: calif6d94250986: Gained carrier Nov 5 14:58:13.276418 containerd[1592]: 2025-11-05 14:58:12.962 [INFO][4326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0 calico-apiserver-5d949b5cc6- calico-apiserver 3166bd9d-6937-48f1-bdd7-75be51da06f6 862 0 2025-11-05 14:57:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d949b5cc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d949b5cc6-wpvtb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6d94250986 [] [] }} ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" 
Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-wpvtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-" Nov 5 14:58:13.276418 containerd[1592]: 2025-11-05 14:58:12.963 [INFO][4326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-wpvtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" Nov 5 14:58:13.276418 containerd[1592]: 2025-11-05 14:58:13.001 [INFO][4359] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" HandleID="k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Workload="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.001 [INFO][4359] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" HandleID="k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Workload="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137f10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d949b5cc6-wpvtb", "timestamp":"2025-11-05 14:58:13.001149235 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.001 [INFO][4359] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.142 [INFO][4359] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.142 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.209 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" host="localhost" Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.219 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.223 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.227 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.230 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:13.276823 containerd[1592]: 2025-11-05 14:58:13.230 [INFO][4359] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" host="localhost" Nov 5 14:58:13.277127 containerd[1592]: 2025-11-05 14:58:13.232 [INFO][4359] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8 Nov 5 14:58:13.277127 containerd[1592]: 2025-11-05 14:58:13.237 [INFO][4359] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" host="localhost" Nov 5 14:58:13.277127 containerd[1592]: 2025-11-05 14:58:13.244 [INFO][4359] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" host="localhost" Nov 5 14:58:13.277127 containerd[1592]: 2025-11-05 14:58:13.248 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" host="localhost" Nov 5 14:58:13.277127 containerd[1592]: 2025-11-05 14:58:13.248 [INFO][4359] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:58:13.277127 containerd[1592]: 2025-11-05 14:58:13.248 [INFO][4359] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" HandleID="k8s-pod-network.147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Workload="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" Nov 5 14:58:13.277242 containerd[1592]: 2025-11-05 14:58:13.252 [INFO][4326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-wpvtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0", GenerateName:"calico-apiserver-5d949b5cc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3166bd9d-6937-48f1-bdd7-75be51da06f6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d949b5cc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d949b5cc6-wpvtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6d94250986", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:13.277297 containerd[1592]: 2025-11-05 14:58:13.253 [INFO][4326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-wpvtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" Nov 5 14:58:13.277297 containerd[1592]: 2025-11-05 14:58:13.253 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6d94250986 ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-wpvtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" Nov 5 14:58:13.277297 containerd[1592]: 2025-11-05 14:58:13.257 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-wpvtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" Nov 5 14:58:13.277360 containerd[1592]: 2025-11-05 14:58:13.257 [INFO][4326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-wpvtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0", GenerateName:"calico-apiserver-5d949b5cc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3166bd9d-6937-48f1-bdd7-75be51da06f6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d949b5cc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8", Pod:"calico-apiserver-5d949b5cc6-wpvtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6d94250986", MAC:"1a:9f:f6:30:c8:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:13.277417 containerd[1592]: 2025-11-05 14:58:13.272 [INFO][4326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" Namespace="calico-apiserver" Pod="calico-apiserver-5d949b5cc6-wpvtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d949b5cc6--wpvtb-eth0" Nov 5 14:58:13.277552 containerd[1592]: time="2025-11-05T14:58:13.276667370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d949b5cc6-z27s7,Uid:4c1ded08-f27a-434b-a1c1-b9344d831e1e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b18b84a558dbdb08b2fa0689f35a474d779b6688b5491092b37cd85aacaa5c4b\"" Nov 5 14:58:13.297923 containerd[1592]: time="2025-11-05T14:58:13.297882242Z" level=info msg="connecting to shim 147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8" address="unix:///run/containerd/s/cdb4de7e47262d6a9d512a933b59bbf61d3501e36508f0637e853cad065513a4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:13.322745 systemd[1]: Started cri-containerd-147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8.scope - libcontainer container 147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8. 
Nov 5 14:58:13.333689 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:13.353513 containerd[1592]: time="2025-11-05T14:58:13.353461349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d949b5cc6-wpvtb,Uid:3166bd9d-6937-48f1-bdd7-75be51da06f6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"147bc836265fe3d5dd7cc89cbf654a731462b8a484b5ed0cd1f1791644b1dca8\"" Nov 5 14:58:13.427202 containerd[1592]: time="2025-11-05T14:58:13.426987720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:13.428556 containerd[1592]: time="2025-11-05T14:58:13.428456802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 14:58:13.428729 containerd[1592]: time="2025-11-05T14:58:13.428523891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 14:58:13.429017 kubelet[2745]: E1105 14:58:13.428979 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 14:58:13.429301 kubelet[2745]: E1105 14:58:13.429028 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 14:58:13.429327 kubelet[2745]: E1105 14:58:13.429278 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f765m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-whbrw_calico-system(2321b872-0e6b-4e50-ac14-19211c4fd305): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:13.429422 containerd[1592]: time="2025-11-05T14:58:13.429372088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:58:13.430673 kubelet[2745]: E1105 14:58:13.430631 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-whbrw" podUID="2321b872-0e6b-4e50-ac14-19211c4fd305" Nov 5 14:58:13.718283 containerd[1592]: time="2025-11-05T14:58:13.718161802Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 5 14:58:13.722199 containerd[1592]: time="2025-11-05T14:58:13.722152230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:58:13.722410 containerd[1592]: time="2025-11-05T14:58:13.722225360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:58:13.722464 kubelet[2745]: E1105 14:58:13.722423 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:13.722531 kubelet[2745]: E1105 14:58:13.722479 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:13.723159 kubelet[2745]: E1105 14:58:13.722725 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sql46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d949b5cc6-z27s7_calico-apiserver(4c1ded08-f27a-434b-a1c1-b9344d831e1e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:13.723368 containerd[1592]: time="2025-11-05T14:58:13.722810000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:58:13.724311 kubelet[2745]: E1105 14:58:13.724272 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7" podUID="4c1ded08-f27a-434b-a1c1-b9344d831e1e" Nov 5 14:58:13.881628 kubelet[2745]: E1105 14:58:13.881394 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:13.898832 containerd[1592]: time="2025-11-05T14:58:13.898775150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tkl5d,Uid:6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092,Namespace:kube-system,Attempt:0,}" Nov 5 14:58:13.972847 containerd[1592]: time="2025-11-05T14:58:13.972739861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:13.973907 containerd[1592]: time="2025-11-05T14:58:13.973868616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:58:13.973975 containerd[1592]: 
time="2025-11-05T14:58:13.973958548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:58:13.974166 kubelet[2745]: E1105 14:58:13.974125 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:13.974287 kubelet[2745]: E1105 14:58:13.974270 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:13.974619 kubelet[2745]: E1105 14:58:13.974525 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2knc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d949b5cc6-wpvtb_calico-apiserver(3166bd9d-6937-48f1-bdd7-75be51da06f6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:13.976812 kubelet[2745]: E1105 14:58:13.976764 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb" podUID="3166bd9d-6937-48f1-bdd7-75be51da06f6" Nov 5 14:58:14.007109 systemd-networkd[1487]: cali063b2a101bb: Link UP Nov 5 14:58:14.007563 systemd-networkd[1487]: cali063b2a101bb: Gained carrier Nov 5 14:58:14.023252 kubelet[2745]: E1105 14:58:14.022407 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-whbrw" podUID="2321b872-0e6b-4e50-ac14-19211c4fd305" Nov 5 14:58:14.023252 kubelet[2745]: E1105 14:58:14.022596 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7" podUID="4c1ded08-f27a-434b-a1c1-b9344d831e1e" Nov 5 14:58:14.025735 containerd[1592]: 2025-11-05 14:58:13.938 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0 coredns-674b8bbfcf- kube-system 6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092 858 0 2025-11-05 14:57:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-tkl5d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali063b2a101bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-tkl5d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tkl5d-" Nov 5 14:58:14.025735 containerd[1592]: 2025-11-05 14:58:13.938 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-tkl5d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" Nov 5 14:58:14.025735 containerd[1592]: 2025-11-05 14:58:13.962 [INFO][4576] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" HandleID="k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Workload="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.962 [INFO][4576] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" HandleID="k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Workload="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-tkl5d", "timestamp":"2025-11-05 14:58:13.962662718 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.962 [INFO][4576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.962 [INFO][4576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.962 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.972 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" host="localhost" Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.979 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.983 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.985 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.988 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Nov 5 14:58:14.026006 containerd[1592]: 2025-11-05 14:58:13.988 [INFO][4576] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" host="localhost" Nov 5 14:58:14.026315 containerd[1592]: 2025-11-05 14:58:13.990 [INFO][4576] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd Nov 5 14:58:14.026315 containerd[1592]: 2025-11-05 14:58:13.993 [INFO][4576] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" host="localhost" Nov 5 14:58:14.026315 containerd[1592]: 2025-11-05 14:58:14.001 [INFO][4576] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" host="localhost" Nov 5 14:58:14.026315 containerd[1592]: 2025-11-05 14:58:14.001 [INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" host="localhost" Nov 5 14:58:14.026315 containerd[1592]: 2025-11-05 14:58:14.001 [INFO][4576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 14:58:14.026315 containerd[1592]: 2025-11-05 14:58:14.001 [INFO][4576] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" HandleID="k8s-pod-network.55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Workload="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" Nov 5 14:58:14.026438 containerd[1592]: 2025-11-05 14:58:14.003 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-tkl5d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-tkl5d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali063b2a101bb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:14.026493 containerd[1592]: 2025-11-05 14:58:14.003 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-tkl5d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" Nov 5 14:58:14.026493 containerd[1592]: 2025-11-05 14:58:14.003 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali063b2a101bb ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-tkl5d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" Nov 5 14:58:14.026493 containerd[1592]: 2025-11-05 14:58:14.007 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-tkl5d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" Nov 5 14:58:14.026557 containerd[1592]: 2025-11-05 14:58:14.008 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-tkl5d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd", Pod:"coredns-674b8bbfcf-tkl5d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali063b2a101bb", MAC:"ca:86:d5:09:20:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:14.026557 containerd[1592]: 2025-11-05 14:58:14.019 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-tkl5d" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tkl5d-eth0" Nov 5 14:58:14.028227 kubelet[2745]: E1105 14:58:14.028163 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb" podUID="3166bd9d-6937-48f1-bdd7-75be51da06f6" Nov 5 14:58:14.054456 containerd[1592]: time="2025-11-05T14:58:14.054413053Z" level=info msg="connecting to shim 55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd" address="unix:///run/containerd/s/3468084067c3bf1285a71e82f73717233bb445b62d53e98f828a911e9c5ab999" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:14.102797 systemd[1]: Started cri-containerd-55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd.scope - libcontainer container 55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd. 
Nov 5 14:58:14.114894 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:14.134491 containerd[1592]: time="2025-11-05T14:58:14.134451568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tkl5d,Uid:6ecbe89f-b7e4-4c8f-aaf0-455d2c1b6092,Namespace:kube-system,Attempt:0,} returns sandbox id \"55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd\"" Nov 5 14:58:14.135215 kubelet[2745]: E1105 14:58:14.135191 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:14.140218 containerd[1592]: time="2025-11-05T14:58:14.140091963Z" level=info msg="CreateContainer within sandbox \"55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 14:58:14.152412 containerd[1592]: time="2025-11-05T14:58:14.152372807Z" level=info msg="Container 12dc52886595c369ae86b9fe7882a3f6b6a9086ed4aebaa66e4566f368c0fec9: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:58:14.157468 containerd[1592]: time="2025-11-05T14:58:14.157431765Z" level=info msg="CreateContainer within sandbox \"55706dbfb59c4951c1cbd4cc3dd5dfa95401282c4d4998357dd05c705ce6e6bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12dc52886595c369ae86b9fe7882a3f6b6a9086ed4aebaa66e4566f368c0fec9\"" Nov 5 14:58:14.157904 containerd[1592]: time="2025-11-05T14:58:14.157878184Z" level=info msg="StartContainer for \"12dc52886595c369ae86b9fe7882a3f6b6a9086ed4aebaa66e4566f368c0fec9\"" Nov 5 14:58:14.159145 containerd[1592]: time="2025-11-05T14:58:14.159119190Z" level=info msg="connecting to shim 12dc52886595c369ae86b9fe7882a3f6b6a9086ed4aebaa66e4566f368c0fec9" address="unix:///run/containerd/s/3468084067c3bf1285a71e82f73717233bb445b62d53e98f828a911e9c5ab999" protocol=ttrpc version=3 Nov 5 
14:58:14.180760 systemd[1]: Started cri-containerd-12dc52886595c369ae86b9fe7882a3f6b6a9086ed4aebaa66e4566f368c0fec9.scope - libcontainer container 12dc52886595c369ae86b9fe7882a3f6b6a9086ed4aebaa66e4566f368c0fec9. Nov 5 14:58:14.207238 containerd[1592]: time="2025-11-05T14:58:14.206906708Z" level=info msg="StartContainer for \"12dc52886595c369ae86b9fe7882a3f6b6a9086ed4aebaa66e4566f368c0fec9\" returns successfully" Nov 5 14:58:14.809791 systemd-networkd[1487]: calif6d94250986: Gained IPv6LL Nov 5 14:58:14.873747 systemd-networkd[1487]: calia27af70c73f: Gained IPv6LL Nov 5 14:58:14.882437 containerd[1592]: time="2025-11-05T14:58:14.882261764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b884f7d-rtl7g,Uid:6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f,Namespace:calico-apiserver,Attempt:0,}" Nov 5 14:58:14.882437 containerd[1592]: time="2025-11-05T14:58:14.882312250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tm97f,Uid:72dac233-4b2b-4265-b846-29435de8b196,Namespace:calico-system,Attempt:0,}" Nov 5 14:58:15.013429 systemd-networkd[1487]: cali853481e5d6b: Link UP Nov 5 14:58:15.014420 systemd-networkd[1487]: cali853481e5d6b: Gained carrier Nov 5 14:58:15.032843 kubelet[2745]: E1105 14:58:15.032672 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:15.037827 kubelet[2745]: E1105 14:58:15.037607 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-whbrw" podUID="2321b872-0e6b-4e50-ac14-19211c4fd305" Nov 5 14:58:15.038711 kubelet[2745]: E1105 14:58:15.037898 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb" podUID="3166bd9d-6937-48f1-bdd7-75be51da06f6" Nov 5 14:58:15.040068 kubelet[2745]: E1105 14:58:15.040027 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7" podUID="4c1ded08-f27a-434b-a1c1-b9344d831e1e" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.933 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0 calico-apiserver-845b884f7d- calico-apiserver 6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f 863 0 2025-11-05 14:57:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:845b884f7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-845b884f7d-rtl7g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali853481e5d6b [] [] }} ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Namespace="calico-apiserver" Pod="calico-apiserver-845b884f7d-rtl7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.934 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Namespace="calico-apiserver" Pod="calico-apiserver-845b884f7d-rtl7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.962 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" HandleID="k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Workload="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.962 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" HandleID="k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Workload="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-845b884f7d-rtl7g", "timestamp":"2025-11-05 14:58:14.962228389 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 
14:58:14.962 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.962 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.962 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.975 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" host="localhost" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.981 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.987 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.990 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.993 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.993 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" host="localhost" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.994 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30 Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:14.999 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" host="localhost" Nov 5 
14:58:15.044068 containerd[1592]: 2025-11-05 14:58:15.006 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" host="localhost" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:15.006 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" host="localhost" Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:15.006 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:58:15.044068 containerd[1592]: 2025-11-05 14:58:15.007 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" HandleID="k8s-pod-network.d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Workload="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" Nov 5 14:58:15.045864 containerd[1592]: 2025-11-05 14:58:15.010 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Namespace="calico-apiserver" Pod="calico-apiserver-845b884f7d-rtl7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0", GenerateName:"calico-apiserver-845b884f7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b884f7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-845b884f7d-rtl7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali853481e5d6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:15.045864 containerd[1592]: 2025-11-05 14:58:15.010 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Namespace="calico-apiserver" Pod="calico-apiserver-845b884f7d-rtl7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" Nov 5 14:58:15.045864 containerd[1592]: 2025-11-05 14:58:15.010 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali853481e5d6b ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Namespace="calico-apiserver" Pod="calico-apiserver-845b884f7d-rtl7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" Nov 5 14:58:15.045864 containerd[1592]: 2025-11-05 14:58:15.015 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Namespace="calico-apiserver" Pod="calico-apiserver-845b884f7d-rtl7g" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" Nov 5 14:58:15.045864 containerd[1592]: 2025-11-05 14:58:15.015 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Namespace="calico-apiserver" Pod="calico-apiserver-845b884f7d-rtl7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0", GenerateName:"calico-apiserver-845b884f7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b884f7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30", Pod:"calico-apiserver-845b884f7d-rtl7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali853481e5d6b", MAC:"9a:80:f6:87:92:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:15.045864 containerd[1592]: 2025-11-05 14:58:15.029 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" Namespace="calico-apiserver" Pod="calico-apiserver-845b884f7d-rtl7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--845b884f7d--rtl7g-eth0" Nov 5 14:58:15.080350 containerd[1592]: time="2025-11-05T14:58:15.079631861Z" level=info msg="connecting to shim d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30" address="unix:///run/containerd/s/3f024cbe6568af384c7c4f4795edf9bd8c1cfc623fdb02e315b2958ffd41cf01" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:15.090329 kubelet[2745]: I1105 14:58:15.090246 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tkl5d" podStartSLOduration=37.090219045 podStartE2EDuration="37.090219045s" podCreationTimestamp="2025-11-05 14:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:58:15.087953629 +0000 UTC m=+44.298508436" watchObservedRunningTime="2025-11-05 14:58:15.090219045 +0000 UTC m=+44.300773812" Nov 5 14:58:15.108248 systemd[1]: Started cri-containerd-d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30.scope - libcontainer container d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30. 
Nov 5 14:58:15.128148 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:15.130328 systemd-networkd[1487]: cali357219ea081: Link UP Nov 5 14:58:15.131067 systemd-networkd[1487]: cali357219ea081: Gained carrier Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:14.942 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tm97f-eth0 csi-node-driver- calico-system 72dac233-4b2b-4265-b846-29435de8b196 750 0 2025-11-05 14:57:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tm97f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali357219ea081 [] [] }} ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Namespace="calico-system" Pod="csi-node-driver-tm97f" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm97f-" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:14.942 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Namespace="calico-system" Pod="csi-node-driver-tm97f" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm97f-eth0" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:14.976 [INFO][4716] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" HandleID="k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Workload="localhost-k8s-csi--node--driver--tm97f-eth0" Nov 5 14:58:15.158749 containerd[1592]: 
2025-11-05 14:58:14.976 [INFO][4716] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" HandleID="k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Workload="localhost-k8s-csi--node--driver--tm97f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tm97f", "timestamp":"2025-11-05 14:58:14.976100447 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:14.977 [INFO][4716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.006 [INFO][4716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.006 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.074 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.092 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.102 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.105 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.109 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.109 [INFO][4716] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.113 [INFO][4716] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.117 [INFO][4716] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.123 [INFO][4716] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.123 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" host="localhost" Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.124 [INFO][4716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:58:15.158749 containerd[1592]: 2025-11-05 14:58:15.124 [INFO][4716] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" HandleID="k8s-pod-network.482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Workload="localhost-k8s-csi--node--driver--tm97f-eth0" Nov 5 14:58:15.159252 containerd[1592]: 2025-11-05 14:58:15.127 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Namespace="calico-system" Pod="csi-node-driver-tm97f" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm97f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tm97f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72dac233-4b2b-4265-b846-29435de8b196", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tm97f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali357219ea081", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:15.159252 containerd[1592]: 2025-11-05 14:58:15.127 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Namespace="calico-system" Pod="csi-node-driver-tm97f" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm97f-eth0" Nov 5 14:58:15.159252 containerd[1592]: 2025-11-05 14:58:15.127 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali357219ea081 ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Namespace="calico-system" Pod="csi-node-driver-tm97f" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm97f-eth0" Nov 5 14:58:15.159252 containerd[1592]: 2025-11-05 14:58:15.132 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Namespace="calico-system" Pod="csi-node-driver-tm97f" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm97f-eth0" Nov 5 14:58:15.159252 containerd[1592]: 2025-11-05 14:58:15.134 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" 
Namespace="calico-system" Pod="csi-node-driver-tm97f" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm97f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tm97f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72dac233-4b2b-4265-b846-29435de8b196", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa", Pod:"csi-node-driver-tm97f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali357219ea081", MAC:"36:eb:61:4f:61:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:15.159252 containerd[1592]: 2025-11-05 14:58:15.152 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" Namespace="calico-system" Pod="csi-node-driver-tm97f" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tm97f-eth0" Nov 5 14:58:15.179447 containerd[1592]: time="2025-11-05T14:58:15.178937043Z" level=info msg="connecting to shim 482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa" address="unix:///run/containerd/s/bdb0d241f646264d2250bce6a163909462e93669c75b65f584d67acada58380e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:15.180191 containerd[1592]: time="2025-11-05T14:58:15.180165323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b884f7d-rtl7g,Uid:6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4f3c60c6de832967ea895b53bc22291817ca9c6a90921cef8a45638e89b1f30\"" Nov 5 14:58:15.183827 containerd[1592]: time="2025-11-05T14:58:15.183799238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:58:15.193757 systemd-networkd[1487]: cali85172cceab0: Gained IPv6LL Nov 5 14:58:15.209746 systemd[1]: Started cri-containerd-482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa.scope - libcontainer container 482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa. 
Nov 5 14:58:15.220145 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:15.231503 containerd[1592]: time="2025-11-05T14:58:15.231467950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tm97f,Uid:72dac233-4b2b-4265-b846-29435de8b196,Namespace:calico-system,Attempt:0,} returns sandbox id \"482a3ca29662a41be58d7f47f5b2ef14fafe2bac4714d409bda9e56c574cfcfa\"" Nov 5 14:58:15.425892 containerd[1592]: time="2025-11-05T14:58:15.425757268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:15.427468 containerd[1592]: time="2025-11-05T14:58:15.427428686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:58:15.427567 containerd[1592]: time="2025-11-05T14:58:15.427506137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:58:15.427708 kubelet[2745]: E1105 14:58:15.427668 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:15.427750 kubelet[2745]: E1105 14:58:15.427721 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:15.428046 kubelet[2745]: E1105 14:58:15.427998 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r9j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-845b884f7d-rtl7g_calico-apiserver(6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:15.428147 containerd[1592]: time="2025-11-05T14:58:15.428043527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 14:58:15.429223 kubelet[2745]: E1105 14:58:15.429192 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g" podUID="6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f" Nov 5 14:58:15.657787 containerd[1592]: 
time="2025-11-05T14:58:15.657628979Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:15.658759 containerd[1592]: time="2025-11-05T14:58:15.658648032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 14:58:15.658759 containerd[1592]: time="2025-11-05T14:58:15.658672676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 14:58:15.658961 kubelet[2745]: E1105 14:58:15.658891 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 14:58:15.658961 kubelet[2745]: E1105 14:58:15.658947 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 14:58:15.659125 kubelet[2745]: E1105 14:58:15.659067 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6w6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tm97f_calico-system(72dac233-4b2b-4265-b846-29435de8b196): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:15.661851 containerd[1592]: time="2025-11-05T14:58:15.661660746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 14:58:15.679787 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:58798.service - OpenSSH per-connection server daemon (10.0.0.1:58798). Nov 5 14:58:15.743167 sshd[4840]: Accepted publickey for core from 10.0.0.1 port 58798 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:15.744531 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:15.750982 systemd-logind[1568]: New session 9 of user core. Nov 5 14:58:15.761786 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 14:58:15.881635 kubelet[2745]: E1105 14:58:15.881048 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:15.882761 containerd[1592]: time="2025-11-05T14:58:15.882162971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bn2pw,Uid:644366bd-9b19-452f-ae30-d5eda30abc3c,Namespace:kube-system,Attempt:0,}" Nov 5 14:58:15.887738 containerd[1592]: time="2025-11-05T14:58:15.887662210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f4ff9db77-mfhq4,Uid:8e2929ec-365a-4dc4-8ec5-85de67c22423,Namespace:calico-system,Attempt:0,}" Nov 5 14:58:15.889316 containerd[1592]: time="2025-11-05T14:58:15.889264580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:15.899598 containerd[1592]: time="2025-11-05T14:58:15.899532802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 14:58:15.900536 containerd[1592]: time="2025-11-05T14:58:15.899634015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 14:58:15.903256 kubelet[2745]: E1105 14:58:15.899762 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 14:58:15.903407 kubelet[2745]: E1105 14:58:15.903382 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 14:58:15.911458 kubelet[2745]: E1105 14:58:15.903730 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6w6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tm97f_calico-system(72dac233-4b2b-4265-b846-29435de8b196): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:15.912721 kubelet[2745]: E1105 14:58:15.912682 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196" Nov 5 14:58:15.961732 systemd-networkd[1487]: cali063b2a101bb: Gained IPv6LL Nov 5 14:58:16.034238 sshd[4843]: Connection closed by 10.0.0.1 port 58798 Nov 5 14:58:16.034839 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:16.038677 kubelet[2745]: E1105 14:58:16.038095 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:16.042808 kubelet[2745]: E1105 14:58:16.042109 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196" Nov 5 14:58:16.042808 kubelet[2745]: E1105 14:58:16.042427 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g" podUID="6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f" Nov 5 14:58:16.044136 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:58798.service: Deactivated successfully. Nov 5 14:58:16.047482 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 14:58:16.049336 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. Nov 5 14:58:16.052269 systemd-logind[1568]: Removed session 9. 
Nov 5 14:58:16.072812 systemd-networkd[1487]: calic32b0e4631e: Link UP Nov 5 14:58:16.073411 systemd-networkd[1487]: calic32b0e4631e: Gained carrier Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:15.959 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0 coredns-674b8bbfcf- kube-system 644366bd-9b19-452f-ae30-d5eda30abc3c 859 0 2025-11-05 14:57:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bn2pw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic32b0e4631e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-bn2pw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bn2pw-" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:15.959 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-bn2pw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:15.994 [INFO][4880] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" HandleID="k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Workload="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:15.994 [INFO][4880] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" 
HandleID="k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Workload="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000510a80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bn2pw", "timestamp":"2025-11-05 14:58:15.994775852 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:15.994 [INFO][4880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:15.995 [INFO][4880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:15.995 [INFO][4880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.012 [INFO][4880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.019 [INFO][4880] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.029 [INFO][4880] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.036 [INFO][4880] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.041 [INFO][4880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.041 [INFO][4880] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.045 [INFO][4880] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.050 [INFO][4880] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.062 [INFO][4880] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.062 [INFO][4880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" host="localhost" Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.062 [INFO][4880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 14:58:16.095925 containerd[1592]: 2025-11-05 14:58:16.063 [INFO][4880] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" HandleID="k8s-pod-network.0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Workload="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" Nov 5 14:58:16.097015 containerd[1592]: 2025-11-05 14:58:16.067 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-bn2pw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"644366bd-9b19-452f-ae30-d5eda30abc3c", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bn2pw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic32b0e4631e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:16.097015 containerd[1592]: 2025-11-05 14:58:16.067 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-bn2pw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" Nov 5 14:58:16.097015 containerd[1592]: 2025-11-05 14:58:16.067 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic32b0e4631e ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-bn2pw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" Nov 5 14:58:16.097015 containerd[1592]: 2025-11-05 14:58:16.073 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-bn2pw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" Nov 5 14:58:16.097015 containerd[1592]: 2025-11-05 14:58:16.074 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-bn2pw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"644366bd-9b19-452f-ae30-d5eda30abc3c", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe", Pod:"coredns-674b8bbfcf-bn2pw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic32b0e4631e", MAC:"72:23:74:c6:62:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:16.097015 containerd[1592]: 2025-11-05 14:58:16.092 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-bn2pw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bn2pw-eth0" Nov 5 14:58:16.145779 containerd[1592]: time="2025-11-05T14:58:16.145737640Z" level=info msg="connecting to shim 0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe" address="unix:///run/containerd/s/ac7659710e7b273a311b0877648bea6b15e9d45510db54b049d4bad8a6adf384" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:16.172923 systemd-networkd[1487]: cali86a7d856bbc: Link UP Nov 5 14:58:16.175233 systemd-networkd[1487]: cali86a7d856bbc: Gained carrier Nov 5 14:58:16.181820 systemd[1]: Started cri-containerd-0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe.scope - libcontainer container 0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe. Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:15.965 [INFO][4859] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0 calico-kube-controllers-6f4ff9db77- calico-system 8e2929ec-365a-4dc4-8ec5-85de67c22423 865 0 2025-11-05 14:57:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f4ff9db77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f4ff9db77-mfhq4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali86a7d856bbc [] [] }} ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Namespace="calico-system" Pod="calico-kube-controllers-6f4ff9db77-mfhq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:15.967 
[INFO][4859] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Namespace="calico-system" Pod="calico-kube-controllers-6f4ff9db77-mfhq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.008 [INFO][4887] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" HandleID="k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Workload="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.008 [INFO][4887] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" HandleID="k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Workload="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f4ff9db77-mfhq4", "timestamp":"2025-11-05 14:58:16.008426377 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.008 [INFO][4887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.062 [INFO][4887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.062 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.112 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.129 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.137 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.139 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.142 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.142 [INFO][4887] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.146 [INFO][4887] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5 Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.156 [INFO][4887] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.166 [INFO][4887] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.166 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" host="localhost" Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.166 [INFO][4887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 14:58:16.191762 containerd[1592]: 2025-11-05 14:58:16.166 [INFO][4887] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" HandleID="k8s-pod-network.6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Workload="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" Nov 5 14:58:16.192276 containerd[1592]: 2025-11-05 14:58:16.168 [INFO][4859] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Namespace="calico-system" Pod="calico-kube-controllers-6f4ff9db77-mfhq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0", GenerateName:"calico-kube-controllers-6f4ff9db77-", Namespace:"calico-system", SelfLink:"", UID:"8e2929ec-365a-4dc4-8ec5-85de67c22423", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f4ff9db77", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f4ff9db77-mfhq4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86a7d856bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:16.192276 containerd[1592]: 2025-11-05 14:58:16.168 [INFO][4859] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Namespace="calico-system" Pod="calico-kube-controllers-6f4ff9db77-mfhq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" Nov 5 14:58:16.192276 containerd[1592]: 2025-11-05 14:58:16.168 [INFO][4859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86a7d856bbc ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Namespace="calico-system" Pod="calico-kube-controllers-6f4ff9db77-mfhq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" Nov 5 14:58:16.192276 containerd[1592]: 2025-11-05 14:58:16.175 [INFO][4859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Namespace="calico-system" Pod="calico-kube-controllers-6f4ff9db77-mfhq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" Nov 5 14:58:16.192276 containerd[1592]: 2025-11-05 
14:58:16.176 [INFO][4859] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Namespace="calico-system" Pod="calico-kube-controllers-6f4ff9db77-mfhq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0", GenerateName:"calico-kube-controllers-6f4ff9db77-", Namespace:"calico-system", SelfLink:"", UID:"8e2929ec-365a-4dc4-8ec5-85de67c22423", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 14, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f4ff9db77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5", Pod:"calico-kube-controllers-6f4ff9db77-mfhq4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86a7d856bbc", MAC:"42:29:0a:e1:df:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 14:58:16.192276 containerd[1592]: 2025-11-05 
14:58:16.188 [INFO][4859] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" Namespace="calico-system" Pod="calico-kube-controllers-6f4ff9db77-mfhq4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4ff9db77--mfhq4-eth0" Nov 5 14:58:16.198726 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:16.214627 containerd[1592]: time="2025-11-05T14:58:16.214491105Z" level=info msg="connecting to shim 6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5" address="unix:///run/containerd/s/db4a7c00e9d22ab54409140ae61d7fb7b04a8d2562dd6b806fd22035118706df" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:58:16.217791 systemd-networkd[1487]: cali853481e5d6b: Gained IPv6LL Nov 5 14:58:16.233509 containerd[1592]: time="2025-11-05T14:58:16.233389999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bn2pw,Uid:644366bd-9b19-452f-ae30-d5eda30abc3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe\"" Nov 5 14:58:16.234205 kubelet[2745]: E1105 14:58:16.234182 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:16.240417 containerd[1592]: time="2025-11-05T14:58:16.240366411Z" level=info msg="CreateContainer within sandbox \"0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 14:58:16.249928 systemd[1]: Started cri-containerd-6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5.scope - libcontainer container 6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5. 
Nov 5 14:58:16.257305 containerd[1592]: time="2025-11-05T14:58:16.257250808Z" level=info msg="Container 7cbc48a5aa242f2b33cfebe892de56e29fc1832bc2ba955dd0dd91c1ee8216a7: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:58:16.264822 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:58:16.268126 containerd[1592]: time="2025-11-05T14:58:16.268065670Z" level=info msg="CreateContainer within sandbox \"0eabe5fc25d667ac6c715c235beb8edf17b162f419743ae8589b1657fb5d93fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cbc48a5aa242f2b33cfebe892de56e29fc1832bc2ba955dd0dd91c1ee8216a7\"" Nov 5 14:58:16.269001 containerd[1592]: time="2025-11-05T14:58:16.268967625Z" level=info msg="StartContainer for \"7cbc48a5aa242f2b33cfebe892de56e29fc1832bc2ba955dd0dd91c1ee8216a7\"" Nov 5 14:58:16.271262 containerd[1592]: time="2025-11-05T14:58:16.271225434Z" level=info msg="connecting to shim 7cbc48a5aa242f2b33cfebe892de56e29fc1832bc2ba955dd0dd91c1ee8216a7" address="unix:///run/containerd/s/ac7659710e7b273a311b0877648bea6b15e9d45510db54b049d4bad8a6adf384" protocol=ttrpc version=3 Nov 5 14:58:16.299803 systemd[1]: Started cri-containerd-7cbc48a5aa242f2b33cfebe892de56e29fc1832bc2ba955dd0dd91c1ee8216a7.scope - libcontainer container 7cbc48a5aa242f2b33cfebe892de56e29fc1832bc2ba955dd0dd91c1ee8216a7. 
Nov 5 14:58:16.302591 containerd[1592]: time="2025-11-05T14:58:16.302454384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f4ff9db77-mfhq4,Uid:8e2929ec-365a-4dc4-8ec5-85de67c22423,Namespace:calico-system,Attempt:0,} returns sandbox id \"6258752c6b69543ec0fb0a17da6fba604e144707fd4acf3ee7e5cca3c0240ab5\"" Nov 5 14:58:16.304794 containerd[1592]: time="2025-11-05T14:58:16.304755278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 14:58:16.339428 containerd[1592]: time="2025-11-05T14:58:16.339375621Z" level=info msg="StartContainer for \"7cbc48a5aa242f2b33cfebe892de56e29fc1832bc2ba955dd0dd91c1ee8216a7\" returns successfully" Nov 5 14:58:16.572900 containerd[1592]: time="2025-11-05T14:58:16.572807726Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:16.573901 containerd[1592]: time="2025-11-05T14:58:16.573818935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 14:58:16.573901 containerd[1592]: time="2025-11-05T14:58:16.573864021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 14:58:16.574126 kubelet[2745]: E1105 14:58:16.574070 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 14:58:16.574126 kubelet[2745]: E1105 14:58:16.574122 2745 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 14:58:16.575829 kubelet[2745]: E1105 14:58:16.575681 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp52f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f4ff9db77-mfhq4_calico-system(8e2929ec-365a-4dc4-8ec5-85de67c22423): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:16.577121 kubelet[2745]: E1105 14:58:16.577064 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4" podUID="8e2929ec-365a-4dc4-8ec5-85de67c22423" Nov 5 14:58:16.857753 systemd-networkd[1487]: cali357219ea081: Gained IPv6LL Nov 5 14:58:17.041109 kubelet[2745]: E1105 14:58:17.041060 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4" podUID="8e2929ec-365a-4dc4-8ec5-85de67c22423" Nov 5 14:58:17.042627 kubelet[2745]: E1105 14:58:17.042321 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:17.042627 kubelet[2745]: E1105 14:58:17.042537 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:17.042917 kubelet[2745]: E1105 14:58:17.042880 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196" Nov 5 14:58:17.043647 kubelet[2745]: E1105 14:58:17.043609 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g" podUID="6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f" Nov 5 14:58:17.241826 systemd-networkd[1487]: cali86a7d856bbc: Gained IPv6LL Nov 5 14:58:17.242517 systemd-networkd[1487]: calic32b0e4631e: Gained IPv6LL Nov 5 14:58:18.044609 kubelet[2745]: E1105 14:58:18.044482 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:18.045364 kubelet[2745]: E1105 14:58:18.045341 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:18.046018 kubelet[2745]: E1105 14:58:18.045759 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4" podUID="8e2929ec-365a-4dc4-8ec5-85de67c22423" Nov 5 14:58:18.056271 kubelet[2745]: I1105 14:58:18.056022 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bn2pw" podStartSLOduration=40.056006839 podStartE2EDuration="40.056006839s" podCreationTimestamp="2025-11-05 14:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:58:17.115662725 +0000 UTC m=+46.326217532" watchObservedRunningTime="2025-11-05 14:58:18.056006839 +0000 UTC m=+47.266561846" Nov 5 14:58:19.046159 kubelet[2745]: E1105 14:58:19.046128 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:20.048742 kubelet[2745]: E1105 14:58:20.048632 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:21.051888 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:57052.service - OpenSSH per-connection server daemon (10.0.0.1:57052). Nov 5 14:58:21.120165 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 57052 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:21.122668 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:21.127030 systemd-logind[1568]: New session 10 of user core. Nov 5 14:58:21.138780 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 5 14:58:21.321920 sshd[5066]: Connection closed by 10.0.0.1 port 57052 Nov 5 14:58:21.323975 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:21.333025 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:57052.service: Deactivated successfully. Nov 5 14:58:21.336129 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 14:58:21.336926 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit. Nov 5 14:58:21.340037 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:57068.service - OpenSSH per-connection server daemon (10.0.0.1:57068). Nov 5 14:58:21.341617 systemd-logind[1568]: Removed session 10. Nov 5 14:58:21.403066 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 57068 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:21.404997 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:21.413716 systemd-logind[1568]: New session 11 of user core. Nov 5 14:58:21.422160 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 14:58:21.601877 sshd[5089]: Connection closed by 10.0.0.1 port 57068 Nov 5 14:58:21.603520 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:21.613744 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:57068.service: Deactivated successfully. Nov 5 14:58:21.617135 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 14:58:21.621088 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. Nov 5 14:58:21.627517 systemd-logind[1568]: Removed session 11. Nov 5 14:58:21.630530 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:57072.service - OpenSSH per-connection server daemon (10.0.0.1:57072). 
Nov 5 14:58:21.692998 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 57072 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:21.694330 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:21.700908 systemd-logind[1568]: New session 12 of user core. Nov 5 14:58:21.706767 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 14:58:21.892288 sshd[5104]: Connection closed by 10.0.0.1 port 57072 Nov 5 14:58:21.892542 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:21.896831 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:57072.service: Deactivated successfully. Nov 5 14:58:21.898774 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 14:58:21.901068 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. Nov 5 14:58:21.902246 systemd-logind[1568]: Removed session 12. Nov 5 14:58:23.883687 containerd[1592]: time="2025-11-05T14:58:23.883490390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 14:58:24.110760 containerd[1592]: time="2025-11-05T14:58:24.110700204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:24.111748 containerd[1592]: time="2025-11-05T14:58:24.111639947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 14:58:24.111748 containerd[1592]: time="2025-11-05T14:58:24.111711075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 14:58:24.111900 kubelet[2745]: E1105 14:58:24.111855 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 14:58:24.112195 kubelet[2745]: E1105 14:58:24.111911 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 14:58:24.112195 kubelet[2745]: E1105 14:58:24.112040 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ae650ae248af48029d0a2949d6c21df7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdmqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-745cc855db-vwx4d_calico-system(58296e15-e5c3-4a81-a7d0-43e43f184c2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:24.114208 containerd[1592]: time="2025-11-05T14:58:24.114136141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 14:58:24.362950 containerd[1592]: time="2025-11-05T14:58:24.362886553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:24.363928 containerd[1592]: time="2025-11-05T14:58:24.363856780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 14:58:24.363928 containerd[1592]: time="2025-11-05T14:58:24.363906225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 14:58:24.364212 kubelet[2745]: E1105 14:58:24.364173 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 14:58:24.364266 kubelet[2745]: E1105 14:58:24.364227 2745 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 14:58:24.364408 kubelet[2745]: E1105 14:58:24.364352 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdmqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-745cc855db-vwx4d_calico-system(58296e15-e5c3-4a81-a7d0-43e43f184c2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:24.368194 kubelet[2745]: E1105 14:58:24.368143 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-745cc855db-vwx4d" podUID="58296e15-e5c3-4a81-a7d0-43e43f184c2f" Nov 5 14:58:26.904761 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:57078.service - OpenSSH per-connection server daemon (10.0.0.1:57078). 
Nov 5 14:58:26.951935 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 57078 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:26.953188 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:26.956962 systemd-logind[1568]: New session 13 of user core. Nov 5 14:58:26.964731 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 14:58:27.093932 sshd[5123]: Connection closed by 10.0.0.1 port 57078 Nov 5 14:58:27.094273 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:27.103596 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:57078.service: Deactivated successfully. Nov 5 14:58:27.105769 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 14:58:27.109211 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Nov 5 14:58:27.112618 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:57092.service - OpenSSH per-connection server daemon (10.0.0.1:57092). Nov 5 14:58:27.113490 systemd-logind[1568]: Removed session 13. Nov 5 14:58:27.168709 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 57092 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:27.170053 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:27.174471 systemd-logind[1568]: New session 14 of user core. Nov 5 14:58:27.186827 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 14:58:27.391635 sshd[5140]: Connection closed by 10.0.0.1 port 57092 Nov 5 14:58:27.392138 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:27.403963 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:57092.service: Deactivated successfully. Nov 5 14:58:27.406437 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 14:58:27.407210 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. 
Nov 5 14:58:27.410208 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:57094.service - OpenSSH per-connection server daemon (10.0.0.1:57094). Nov 5 14:58:27.413665 systemd-logind[1568]: Removed session 14. Nov 5 14:58:27.484477 sshd[5151]: Accepted publickey for core from 10.0.0.1 port 57094 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:27.486137 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:27.491063 systemd-logind[1568]: New session 15 of user core. Nov 5 14:58:27.496780 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 14:58:27.882607 containerd[1592]: time="2025-11-05T14:58:27.882542471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 14:58:28.149618 containerd[1592]: time="2025-11-05T14:58:28.149449432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:28.150621 containerd[1592]: time="2025-11-05T14:58:28.150531064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 14:58:28.150621 containerd[1592]: time="2025-11-05T14:58:28.150563067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 14:58:28.151058 kubelet[2745]: E1105 14:58:28.150815 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 14:58:28.151058 kubelet[2745]: E1105 14:58:28.150883 2745 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 14:58:28.151394 kubelet[2745]: E1105 14:58:28.151294 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f765m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-whbrw_calico-system(2321b872-0e6b-4e50-ac14-19211c4fd305): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:28.151485 containerd[1592]: time="2025-11-05T14:58:28.151278702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:58:28.152782 kubelet[2745]: E1105 14:58:28.152676 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-whbrw" podUID="2321b872-0e6b-4e50-ac14-19211c4fd305" Nov 5 14:58:28.426169 containerd[1592]: time="2025-11-05T14:58:28.425949907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:28.428227 containerd[1592]: time="2025-11-05T14:58:28.428107451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:58:28.428227 containerd[1592]: time="2025-11-05T14:58:28.428125533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:58:28.428584 kubelet[2745]: E1105 14:58:28.428448 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:28.428678 kubelet[2745]: E1105 14:58:28.428598 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:28.429092 containerd[1592]: time="2025-11-05T14:58:28.428990943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:58:28.429337 kubelet[2745]: E1105 14:58:28.429235 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4r9j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-845b884f7d-rtl7g_calico-apiserver(6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:28.430751 kubelet[2745]: E1105 14:58:28.430643 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g" podUID="6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f" Nov 5 14:58:28.496415 sshd[5154]: Connection closed by 10.0.0.1 port 57094 Nov 5 14:58:28.497347 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:28.509981 systemd[1]: 
sshd@14-10.0.0.6:22-10.0.0.1:57094.service: Deactivated successfully. Nov 5 14:58:28.514158 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 14:58:28.517593 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. Nov 5 14:58:28.521244 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:57098.service - OpenSSH per-connection server daemon (10.0.0.1:57098). Nov 5 14:58:28.523973 systemd-logind[1568]: Removed session 15. Nov 5 14:58:28.596440 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 57098 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:28.598267 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:28.602783 systemd-logind[1568]: New session 16 of user core. Nov 5 14:58:28.609754 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 14:58:28.736686 containerd[1592]: time="2025-11-05T14:58:28.736291777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:28.737930 containerd[1592]: time="2025-11-05T14:58:28.737886223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:58:28.737930 containerd[1592]: time="2025-11-05T14:58:28.737958791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:58:28.738223 kubelet[2745]: E1105 14:58:28.738182 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:28.738277 kubelet[2745]: E1105 14:58:28.738244 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:28.738437 kubelet[2745]: E1105 14:58:28.738395 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2knc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d949b5cc6-wpvtb_calico-apiserver(3166bd9d-6937-48f1-bdd7-75be51da06f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:28.739996 kubelet[2745]: E1105 14:58:28.739845 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb" podUID="3166bd9d-6937-48f1-bdd7-75be51da06f6" Nov 5 14:58:28.884591 containerd[1592]: time="2025-11-05T14:58:28.882658978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 14:58:28.920606 sshd[5178]: Connection closed by 10.0.0.1 
port 57098 Nov 5 14:58:28.920595 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:28.928008 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:57098.service: Deactivated successfully. Nov 5 14:58:28.931235 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 14:58:28.934673 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. Nov 5 14:58:28.939070 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:57104.service - OpenSSH per-connection server daemon (10.0.0.1:57104). Nov 5 14:58:28.940115 systemd-logind[1568]: Removed session 16. Nov 5 14:58:29.006739 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 57104 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:29.008797 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:29.013003 systemd-logind[1568]: New session 17 of user core. Nov 5 14:58:29.022823 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 5 14:58:29.155691 containerd[1592]: time="2025-11-05T14:58:29.155632609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:29.156728 containerd[1592]: time="2025-11-05T14:58:29.156653674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 14:58:29.156834 containerd[1592]: time="2025-11-05T14:58:29.156733922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 14:58:29.157916 kubelet[2745]: E1105 14:58:29.157830 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:29.157916 kubelet[2745]: E1105 14:58:29.157896 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 14:58:29.158274 kubelet[2745]: E1105 14:58:29.158045 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sql46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d949b5cc6-z27s7_calico-apiserver(4c1ded08-f27a-434b-a1c1-b9344d831e1e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:29.159650 kubelet[2745]: E1105 14:58:29.159566 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7" podUID="4c1ded08-f27a-434b-a1c1-b9344d831e1e" Nov 5 14:58:29.170860 sshd[5198]: Connection closed by 10.0.0.1 port 57104 Nov 5 14:58:29.171750 sshd-session[5195]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:29.176511 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:57104.service: Deactivated successfully. Nov 5 14:58:29.178456 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 14:58:29.179662 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. Nov 5 14:58:29.180867 systemd-logind[1568]: Removed session 17. 
Nov 5 14:58:29.881973 containerd[1592]: time="2025-11-05T14:58:29.881919149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 14:58:30.525213 containerd[1592]: time="2025-11-05T14:58:30.525164821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:30.526383 containerd[1592]: time="2025-11-05T14:58:30.526067953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 14:58:30.526383 containerd[1592]: time="2025-11-05T14:58:30.526138200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 14:58:30.527228 kubelet[2745]: E1105 14:58:30.526652 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 14:58:30.527228 kubelet[2745]: E1105 14:58:30.526710 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 14:58:30.527228 kubelet[2745]: E1105 14:58:30.526922 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp52f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6f4ff9db77-mfhq4_calico-system(8e2929ec-365a-4dc4-8ec5-85de67c22423): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:30.527671 containerd[1592]: time="2025-11-05T14:58:30.527443492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 14:58:30.528108 kubelet[2745]: E1105 14:58:30.528010 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4" podUID="8e2929ec-365a-4dc4-8ec5-85de67c22423" Nov 5 14:58:30.859523 containerd[1592]: 
time="2025-11-05T14:58:30.859092554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:30.860303 containerd[1592]: time="2025-11-05T14:58:30.860270193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 14:58:30.860377 containerd[1592]: time="2025-11-05T14:58:30.860360963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 14:58:30.860556 kubelet[2745]: E1105 14:58:30.860521 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 14:58:30.860628 kubelet[2745]: E1105 14:58:30.860571 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 14:58:30.867514 kubelet[2745]: E1105 14:58:30.867381 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6w6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tm97f_calico-system(72dac233-4b2b-4265-b846-29435de8b196): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:30.869917 containerd[1592]: time="2025-11-05T14:58:30.869889248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 14:58:31.088355 containerd[1592]: time="2025-11-05T14:58:31.088303654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 14:58:31.089278 containerd[1592]: time="2025-11-05T14:58:31.089205824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 14:58:31.089324 containerd[1592]: time="2025-11-05T14:58:31.089245908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 14:58:31.089510 kubelet[2745]: E1105 14:58:31.089475 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 14:58:31.089794 kubelet[2745]: E1105 14:58:31.089616 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 14:58:31.089794 kubelet[2745]: E1105 
14:58:31.089742 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6w6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-tm97f_calico-system(72dac233-4b2b-4265-b846-29435de8b196): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 14:58:31.091464 kubelet[2745]: E1105 14:58:31.091429 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196" Nov 5 14:58:34.187968 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:53586.service - OpenSSH per-connection server daemon (10.0.0.1:53586). Nov 5 14:58:34.258948 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 53586 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:34.261782 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:34.268259 systemd-logind[1568]: New session 18 of user core. Nov 5 14:58:34.273769 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 5 14:58:34.417408 sshd[5218]: Connection closed by 10.0.0.1 port 53586 Nov 5 14:58:34.417812 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:34.424674 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. Nov 5 14:58:34.424864 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:53586.service: Deactivated successfully. Nov 5 14:58:34.427000 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 14:58:34.428830 systemd-logind[1568]: Removed session 18. Nov 5 14:58:38.882758 kubelet[2745]: E1105 14:58:38.882671 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-whbrw" podUID="2321b872-0e6b-4e50-ac14-19211c4fd305" Nov 5 14:58:38.883806 kubelet[2745]: E1105 14:58:38.883758 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-745cc855db-vwx4d" podUID="58296e15-e5c3-4a81-a7d0-43e43f184c2f" Nov 5 14:58:39.081758 containerd[1592]: time="2025-11-05T14:58:39.081694197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f25dbadf92fcf1631580615c838d02d6c7e650c5585966d50a1f3645ae313f6\" id:\"03d3e6477125cc5c487e1dff755cf9701da99b61e71e0b9747c9a41a3e516731\" pid:5248 exited_at:{seconds:1762354719 nanos:80928865}" Nov 5 14:58:39.083440 kubelet[2745]: E1105 14:58:39.083410 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:58:39.438136 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:57218.service - OpenSSH per-connection server daemon (10.0.0.1:57218). Nov 5 14:58:39.517033 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 57218 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:39.518456 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:39.522416 systemd-logind[1568]: New session 19 of user core. Nov 5 14:58:39.531751 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 14:58:39.683466 sshd[5264]: Connection closed by 10.0.0.1 port 57218 Nov 5 14:58:39.682132 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:39.689272 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:57218.service: Deactivated successfully. Nov 5 14:58:39.691693 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 14:58:39.693902 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. Nov 5 14:58:39.695192 systemd-logind[1568]: Removed session 19. 
Nov 5 14:58:40.882503 kubelet[2745]: E1105 14:58:40.882458 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-wpvtb" podUID="3166bd9d-6937-48f1-bdd7-75be51da06f6" Nov 5 14:58:40.882941 kubelet[2745]: E1105 14:58:40.882568 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d949b5cc6-z27s7" podUID="4c1ded08-f27a-434b-a1c1-b9344d831e1e" Nov 5 14:58:40.882941 kubelet[2745]: E1105 14:58:40.882714 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-845b884f7d-rtl7g" podUID="6bb57cd3-dd1b-489b-86e0-4fd3b7b01f3f" Nov 5 14:58:42.882891 kubelet[2745]: E1105 14:58:42.882833 2745 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tm97f" podUID="72dac233-4b2b-4265-b846-29435de8b196" Nov 5 14:58:43.882543 kubelet[2745]: E1105 14:58:43.882496 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f4ff9db77-mfhq4" podUID="8e2929ec-365a-4dc4-8ec5-85de67c22423" Nov 5 14:58:44.695802 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:57232.service - OpenSSH per-connection server daemon (10.0.0.1:57232). 
Nov 5 14:58:44.757709 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 57232 ssh2: RSA SHA256:UhT5f9wmCQdzEoNsOMgi3BTQyvbPzZOMnEl9uhE+rTc Nov 5 14:58:44.758881 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:44.762749 systemd-logind[1568]: New session 20 of user core. Nov 5 14:58:44.771057 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 14:58:44.921059 sshd[5283]: Connection closed by 10.0.0.1 port 57232 Nov 5 14:58:44.921382 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:44.925189 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:57232.service: Deactivated successfully. Nov 5 14:58:44.926815 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 14:58:44.928545 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. Nov 5 14:58:44.929208 systemd-logind[1568]: Removed session 20.