Oct 31 13:34:00.260365 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 31 13:34:00.260390 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Oct 31 12:15:30 -00 2025
Oct 31 13:34:00.260398 kernel: KASLR enabled
Oct 31 13:34:00.260404 kernel: efi: EFI v2.7 by EDK II
Oct 31 13:34:00.260410 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 31 13:34:00.260416 kernel: random: crng init done
Oct 31 13:34:00.260423 kernel: secureboot: Secure boot disabled
Oct 31 13:34:00.260429 kernel: ACPI: Early table checksum verification disabled
Oct 31 13:34:00.260437 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 31 13:34:00.260443 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 31 13:34:00.260450 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260456 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260462 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260468 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260478 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260484 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260491 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260497 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260504 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 13:34:00.260511 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 31 13:34:00.260517 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 31 13:34:00.260524 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 13:34:00.260531 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 31 13:34:00.260538 kernel: Zone ranges:
Oct 31 13:34:00.260545 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 13:34:00.260551 kernel: DMA32 empty
Oct 31 13:34:00.260557 kernel: Normal empty
Oct 31 13:34:00.260564 kernel: Device empty
Oct 31 13:34:00.260570 kernel: Movable zone start for each node
Oct 31 13:34:00.260577 kernel: Early memory node ranges
Oct 31 13:34:00.260584 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 31 13:34:00.260590 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 31 13:34:00.260638 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 31 13:34:00.260646 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 31 13:34:00.260655 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 31 13:34:00.260661 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 31 13:34:00.260668 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 31 13:34:00.260675 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 31 13:34:00.260681 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 31 13:34:00.260688 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 31 13:34:00.260698 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 31 13:34:00.260705 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 31 13:34:00.260712 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 31 13:34:00.260719 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 13:34:00.260726 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 31 13:34:00.260733 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 31 13:34:00.260740 kernel: psci: probing for conduit method from ACPI.
Oct 31 13:34:00.260747 kernel: psci: PSCIv1.1 detected in firmware.
Oct 31 13:34:00.260754 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 31 13:34:00.260769 kernel: psci: Trusted OS migration not required
Oct 31 13:34:00.260776 kernel: psci: SMC Calling Convention v1.1
Oct 31 13:34:00.260783 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 31 13:34:00.260790 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 31 13:34:00.260798 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 31 13:34:00.260805 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 31 13:34:00.260812 kernel: Detected PIPT I-cache on CPU0
Oct 31 13:34:00.260819 kernel: CPU features: detected: GIC system register CPU interface
Oct 31 13:34:00.260826 kernel: CPU features: detected: Spectre-v4
Oct 31 13:34:00.260833 kernel: CPU features: detected: Spectre-BHB
Oct 31 13:34:00.260841 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 31 13:34:00.260848 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 31 13:34:00.260855 kernel: CPU features: detected: ARM erratum 1418040
Oct 31 13:34:00.260862 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 31 13:34:00.260870 kernel: alternatives: applying boot alternatives
Oct 31 13:34:00.260878 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cc520f2d13274355d865d6b74d46b5152253502842541152122d42de9e5fecb2
Oct 31 13:34:00.260885 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 31 13:34:00.260892 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 13:34:00.260899 kernel: Fallback order for Node 0: 0
Oct 31 13:34:00.260907 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 31 13:34:00.260915 kernel: Policy zone: DMA
Oct 31 13:34:00.260922 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 13:34:00.260929 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 31 13:34:00.260936 kernel: software IO TLB: area num 4.
Oct 31 13:34:00.260943 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 31 13:34:00.260951 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 31 13:34:00.260958 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 31 13:34:00.260965 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 31 13:34:00.260972 kernel: rcu: RCU event tracing is enabled.
Oct 31 13:34:00.260980 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 31 13:34:00.260987 kernel: Trampoline variant of Tasks RCU enabled.
Oct 31 13:34:00.260995 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 13:34:00.261002 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 13:34:00.261009 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 31 13:34:00.261016 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 13:34:00.261023 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 13:34:00.261030 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 31 13:34:00.261037 kernel: GICv3: 256 SPIs implemented
Oct 31 13:34:00.261044 kernel: GICv3: 0 Extended SPIs implemented
Oct 31 13:34:00.261051 kernel: Root IRQ handler: gic_handle_irq
Oct 31 13:34:00.261058 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 31 13:34:00.261065 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 31 13:34:00.261073 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 31 13:34:00.261080 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 31 13:34:00.261087 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 31 13:34:00.261094 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 31 13:34:00.261101 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 31 13:34:00.261108 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 31 13:34:00.261115 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 31 13:34:00.261122 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 13:34:00.261129 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 31 13:34:00.261136 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 31 13:34:00.261144 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 31 13:34:00.261152 kernel: arm-pv: using stolen time PV
Oct 31 13:34:00.261160 kernel: Console: colour dummy device 80x25
Oct 31 13:34:00.261167 kernel: ACPI: Core revision 20240827
Oct 31 13:34:00.261175 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 31 13:34:00.261182 kernel: pid_max: default: 32768 minimum: 301
Oct 31 13:34:00.261190 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 31 13:34:00.261197 kernel: landlock: Up and running.
Oct 31 13:34:00.261204 kernel: SELinux: Initializing.
Oct 31 13:34:00.261213 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 13:34:00.261221 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 13:34:00.261228 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 13:34:00.261236 kernel: rcu: Max phase no-delay instances is 400.
Oct 31 13:34:00.261250 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 31 13:34:00.261321 kernel: Remapping and enabling EFI services.
Oct 31 13:34:00.261333 kernel: smp: Bringing up secondary CPUs ...
Oct 31 13:34:00.261344 kernel: Detected PIPT I-cache on CPU1
Oct 31 13:34:00.261356 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 31 13:34:00.261365 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 31 13:34:00.261373 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 13:34:00.261381 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 31 13:34:00.261388 kernel: Detected PIPT I-cache on CPU2
Oct 31 13:34:00.261396 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 31 13:34:00.261405 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 31 13:34:00.261413 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 13:34:00.261421 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 31 13:34:00.261428 kernel: Detected PIPT I-cache on CPU3
Oct 31 13:34:00.261436 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 31 13:34:00.261444 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 31 13:34:00.261452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 13:34:00.261475 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 31 13:34:00.261483 kernel: smp: Brought up 1 node, 4 CPUs
Oct 31 13:34:00.261490 kernel: SMP: Total of 4 processors activated.
Oct 31 13:34:00.261516 kernel: CPU: All CPU(s) started at EL1
Oct 31 13:34:00.261524 kernel: CPU features: detected: 32-bit EL0 Support
Oct 31 13:34:00.261531 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 31 13:34:00.261539 kernel: CPU features: detected: Common not Private translations
Oct 31 13:34:00.261548 kernel: CPU features: detected: CRC32 instructions
Oct 31 13:34:00.261556 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 31 13:34:00.261564 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 31 13:34:00.261571 kernel: CPU features: detected: LSE atomic instructions
Oct 31 13:34:00.261579 kernel: CPU features: detected: Privileged Access Never
Oct 31 13:34:00.261586 kernel: CPU features: detected: RAS Extension Support
Oct 31 13:34:00.261594 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 31 13:34:00.261603 kernel: alternatives: applying system-wide alternatives
Oct 31 13:34:00.261611 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 31 13:34:00.261619 kernel: Memory: 2451104K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12288K init, 1038K bss, 98848K reserved, 16384K cma-reserved)
Oct 31 13:34:00.261627 kernel: devtmpfs: initialized
Oct 31 13:34:00.261635 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 13:34:00.261642 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 31 13:34:00.261650 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 31 13:34:00.261659 kernel: 0 pages in range for non-PLT usage
Oct 31 13:34:00.261667 kernel: 515232 pages in range for PLT usage
Oct 31 13:34:00.261674 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 13:34:00.261682 kernel: SMBIOS 3.0.0 present.
Oct 31 13:34:00.261689 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 31 13:34:00.261697 kernel: DMI: Memory slots populated: 1/1
Oct 31 13:34:00.261704 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 13:34:00.261712 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 31 13:34:00.261721 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 31 13:34:00.261729 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 31 13:34:00.261737 kernel: audit: initializing netlink subsys (disabled)
Oct 31 13:34:00.261744 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Oct 31 13:34:00.261752 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 13:34:00.261759 kernel: cpuidle: using governor menu
Oct 31 13:34:00.261767 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 31 13:34:00.261776 kernel: ASID allocator initialised with 32768 entries
Oct 31 13:34:00.261783 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 13:34:00.261791 kernel: Serial: AMBA PL011 UART driver
Oct 31 13:34:00.261799 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 13:34:00.261807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 31 13:34:00.261815 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 31 13:34:00.261823 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 31 13:34:00.261831 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 13:34:00.261839 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 31 13:34:00.261847 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 31 13:34:00.261854 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 31 13:34:00.261862 kernel: ACPI: Added _OSI(Module Device)
Oct 31 13:34:00.261869 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 13:34:00.261877 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 13:34:00.261885 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 13:34:00.261893 kernel: ACPI: Interpreter enabled
Oct 31 13:34:00.261901 kernel: ACPI: Using GIC for interrupt routing
Oct 31 13:34:00.261909 kernel: ACPI: MCFG table detected, 1 entries
Oct 31 13:34:00.261916 kernel: ACPI: CPU0 has been hot-added
Oct 31 13:34:00.261924 kernel: ACPI: CPU1 has been hot-added
Oct 31 13:34:00.261931 kernel: ACPI: CPU2 has been hot-added
Oct 31 13:34:00.261939 kernel: ACPI: CPU3 has been hot-added
Oct 31 13:34:00.261947 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 31 13:34:00.261955 kernel: printk: legacy console [ttyAMA0] enabled
Oct 31 13:34:00.261963 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 13:34:00.262122 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 13:34:00.262210 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 31 13:34:00.262422 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 31 13:34:00.262520 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 31 13:34:00.262603 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 31 13:34:00.262613 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 31 13:34:00.262621 kernel: PCI host bridge to bus 0000:00
Oct 31 13:34:00.262709 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 31 13:34:00.262785 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 31 13:34:00.262862 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 31 13:34:00.262934 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 13:34:00.263034 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 31 13:34:00.263124 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 31 13:34:00.263211 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 31 13:34:00.263392 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 31 13:34:00.263485 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 31 13:34:00.263567 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 31 13:34:00.263649 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 31 13:34:00.263731 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 31 13:34:00.263808 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 31 13:34:00.263885 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 31 13:34:00.263956 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 31 13:34:00.263966 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 31 13:34:00.263974 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 31 13:34:00.263982 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 31 13:34:00.263989 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 31 13:34:00.263997 kernel: iommu: Default domain type: Translated
Oct 31 13:34:00.264007 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 31 13:34:00.264014 kernel: efivars: Registered efivars operations
Oct 31 13:34:00.264022 kernel: vgaarb: loaded
Oct 31 13:34:00.264029 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 31 13:34:00.264037 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 13:34:00.264045 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 13:34:00.264052 kernel: pnp: PnP ACPI init
Oct 31 13:34:00.264146 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 31 13:34:00.264157 kernel: pnp: PnP ACPI: found 1 devices
Oct 31 13:34:00.264165 kernel: NET: Registered PF_INET protocol family
Oct 31 13:34:00.264173 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 31 13:34:00.264181 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 31 13:34:00.264189 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 13:34:00.264197 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 13:34:00.264206 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 31 13:34:00.264214 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 31 13:34:00.264222 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 13:34:00.264229 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 13:34:00.264237 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 13:34:00.264253 kernel: PCI: CLS 0 bytes, default 64
Oct 31 13:34:00.264306 kernel: kvm [1]: HYP mode not available
Oct 31 13:34:00.264319 kernel: Initialise system trusted keyrings
Oct 31 13:34:00.264326 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 31 13:34:00.264334 kernel: Key type asymmetric registered
Oct 31 13:34:00.264342 kernel: Asymmetric key parser 'x509' registered
Oct 31 13:34:00.264349 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 31 13:34:00.264357 kernel: io scheduler mq-deadline registered
Oct 31 13:34:00.264365 kernel: io scheduler kyber registered
Oct 31 13:34:00.264374 kernel: io scheduler bfq registered
Oct 31 13:34:00.264382 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 31 13:34:00.264389 kernel: ACPI: button: Power Button [PWRB]
Oct 31 13:34:00.264398 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 31 13:34:00.264499 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 31 13:34:00.264511 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 31 13:34:00.264519 kernel: thunder_xcv, ver 1.0
Oct 31 13:34:00.264529 kernel: thunder_bgx, ver 1.0
Oct 31 13:34:00.264537 kernel: nicpf, ver 1.0
Oct 31 13:34:00.264544 kernel: nicvf, ver 1.0
Oct 31 13:34:00.264635 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 31 13:34:00.264714 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-31T13:33:59 UTC (1761917639)
Oct 31 13:34:00.264724 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 31 13:34:00.264734 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 31 13:34:00.264743 kernel: watchdog: NMI not fully supported
Oct 31 13:34:00.264750 kernel: watchdog: Hard watchdog permanently disabled
Oct 31 13:34:00.264758 kernel: NET: Registered PF_INET6 protocol family
Oct 31 13:34:00.264766 kernel: Segment Routing with IPv6
Oct 31 13:34:00.264773 kernel: In-situ OAM (IOAM) with IPv6
Oct 31 13:34:00.264781 kernel: NET: Registered PF_PACKET protocol family
Oct 31 13:34:00.264790 kernel: Key type dns_resolver registered
Oct 31 13:34:00.264797 kernel: registered taskstats version 1
Oct 31 13:34:00.264805 kernel: Loading compiled-in X.509 certificates
Oct 31 13:34:00.264813 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 64cdd3ce1e781c447f31e2db38e6b9c169999a49'
Oct 31 13:34:00.264820 kernel: Demotion targets for Node 0: null
Oct 31 13:34:00.264828 kernel: Key type .fscrypt registered
Oct 31 13:34:00.264835 kernel: Key type fscrypt-provisioning registered
Oct 31 13:34:00.264843 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 31 13:34:00.264851 kernel: ima: Allocated hash algorithm: sha1
Oct 31 13:34:00.264859 kernel: ima: No architecture policies found
Oct 31 13:34:00.264866 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 31 13:34:00.264874 kernel: clk: Disabling unused clocks
Oct 31 13:34:00.264881 kernel: PM: genpd: Disabling unused power domains
Oct 31 13:34:00.264889 kernel: Freeing unused kernel memory: 12288K
Oct 31 13:34:00.264896 kernel: Run /init as init process
Oct 31 13:34:00.264905 kernel: with arguments:
Oct 31 13:34:00.264912 kernel: /init
Oct 31 13:34:00.264920 kernel: with environment:
Oct 31 13:34:00.264927 kernel: HOME=/
Oct 31 13:34:00.264935 kernel: TERM=linux
Oct 31 13:34:00.265028 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 31 13:34:00.265106 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 31 13:34:00.265118 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 31 13:34:00.265126 kernel: GPT:16515071 != 27000831
Oct 31 13:34:00.265133 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 31 13:34:00.265141 kernel: GPT:16515071 != 27000831
Oct 31 13:34:00.265148 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 31 13:34:00.265155 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 13:34:00.265164 kernel: SCSI subsystem initialized
Oct 31 13:34:00.265172 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 31 13:34:00.265179 kernel: device-mapper: uevent: version 1.0.3
Oct 31 13:34:00.265187 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 31 13:34:00.265194 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 31 13:34:00.265202 kernel: raid6: neonx8 gen() 15612 MB/s
Oct 31 13:34:00.265209 kernel: raid6: neonx4 gen() 15593 MB/s
Oct 31 13:34:00.265218 kernel: raid6: neonx2 gen() 13082 MB/s
Oct 31 13:34:00.265225 kernel: raid6: neonx1 gen() 10353 MB/s
Oct 31 13:34:00.265233 kernel: raid6: int64x8 gen() 6732 MB/s
Oct 31 13:34:00.265240 kernel: raid6: int64x4 gen() 7340 MB/s
Oct 31 13:34:00.265313 kernel: raid6: int64x2 gen() 6080 MB/s
Oct 31 13:34:00.265325 kernel: raid6: int64x1 gen() 5020 MB/s
Oct 31 13:34:00.265333 kernel: raid6: using algorithm neonx8 gen() 15612 MB/s
Oct 31 13:34:00.265344 kernel: raid6: .... xor() 11781 MB/s, rmw enabled
Oct 31 13:34:00.265351 kernel: raid6: using neon recovery algorithm
Oct 31 13:34:00.265359 kernel: xor: measuring software checksum speed
Oct 31 13:34:00.265366 kernel: 8regs : 21590 MB/sec
Oct 31 13:34:00.265374 kernel: 32regs : 19276 MB/sec
Oct 31 13:34:00.265382 kernel: arm64_neon : 28089 MB/sec
Oct 31 13:34:00.265390 kernel: xor: using function: arm64_neon (28089 MB/sec)
Oct 31 13:34:00.265398 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 31 13:34:00.265406 kernel: BTRFS: device fsid 2e48a6cc-4be7-468d-abbe-613184ca2d09 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (207)
Oct 31 13:34:00.265414 kernel: BTRFS info (device dm-0): first mount of filesystem 2e48a6cc-4be7-468d-abbe-613184ca2d09
Oct 31 13:34:00.265422 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 31 13:34:00.265430 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 31 13:34:00.265438 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 31 13:34:00.265445 kernel: loop: module loaded
Oct 31 13:34:00.265454 kernel: loop0: detected capacity change from 0 to 91464
Oct 31 13:34:00.265462 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 13:34:00.265471 systemd[1]: Successfully made /usr/ read-only.
Oct 31 13:34:00.265482 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 31 13:34:00.265491 systemd[1]: Detected virtualization kvm.
Oct 31 13:34:00.265498 systemd[1]: Detected architecture arm64.
Oct 31 13:34:00.265507 systemd[1]: Running in initrd.
Oct 31 13:34:00.265515 systemd[1]: No hostname configured, using default hostname.
Oct 31 13:34:00.265523 systemd[1]: Hostname set to .
Oct 31 13:34:00.265531 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 31 13:34:00.265539 systemd[1]: Queued start job for default target initrd.target.
Oct 31 13:34:00.265547 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 31 13:34:00.265555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 13:34:00.265565 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 13:34:00.265574 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 31 13:34:00.265583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 13:34:00.265591 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 31 13:34:00.265600 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 31 13:34:00.265609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 13:34:00.265617 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 13:34:00.265625 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 31 13:34:00.265634 systemd[1]: Reached target paths.target - Path Units.
Oct 31 13:34:00.265642 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 13:34:00.265651 systemd[1]: Reached target swap.target - Swaps.
Oct 31 13:34:00.265659 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 13:34:00.265668 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 13:34:00.265677 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 13:34:00.265686 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 31 13:34:00.265694 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 31 13:34:00.265709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 13:34:00.265720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 13:34:00.265729 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 13:34:00.265737 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 13:34:00.265746 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 31 13:34:00.265755 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 31 13:34:00.265763 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 13:34:00.265771 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 31 13:34:00.265782 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 31 13:34:00.265791 systemd[1]: Starting systemd-fsck-usr.service...
Oct 31 13:34:00.265799 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 13:34:00.265808 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 13:34:00.265816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 13:34:00.265826 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 31 13:34:00.265835 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 13:34:00.265844 systemd[1]: Finished systemd-fsck-usr.service.
Oct 31 13:34:00.265852 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 13:34:00.265861 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 31 13:34:00.265870 kernel: Bridge firewalling registered
Oct 31 13:34:00.265878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 13:34:00.265908 systemd-journald[345]: Collecting audit messages is disabled.
Oct 31 13:34:00.265929 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 13:34:00.265939 systemd-journald[345]: Journal started
Oct 31 13:34:00.265957 systemd-journald[345]: Runtime Journal (/run/log/journal/c18ba874891640d28e12711460aba85c) is 6M, max 48.5M, 42.4M free.
Oct 31 13:34:00.254033 systemd-modules-load[346]: Inserted module 'br_netfilter'
Oct 31 13:34:00.273213 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 13:34:00.275394 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 13:34:00.275974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 13:34:00.278657 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 13:34:00.282788 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 13:34:00.285636 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 13:34:00.288482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 13:34:00.296384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 13:34:00.304436 systemd-tmpfiles[373]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 31 13:34:00.306114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 13:34:00.309021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 13:34:00.312237 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 13:34:00.314834 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 31 13:34:00.333252 systemd-resolved[370]: Positive Trust Anchors:
Oct 31 13:34:00.333324 systemd-resolved[370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 13:34:00.333328 systemd-resolved[370]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 31 13:34:00.333359 systemd-resolved[370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 13:34:00.345347 dracut-cmdline[390]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cc520f2d13274355d865d6b74d46b5152253502842541152122d42de9e5fecb2
Oct 31 13:34:00.355611 systemd-resolved[370]: Defaulting to hostname 'linux'.
Oct 31 13:34:00.356575 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 13:34:00.358852 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 13:34:00.411305 kernel: Loading iSCSI transport class v2.0-870.
Oct 31 13:34:00.419305 kernel: iscsi: registered transport (tcp)
Oct 31 13:34:00.432467 kernel: iscsi: registered transport (qla4xxx)
Oct 31 13:34:00.432540 kernel: QLogic iSCSI HBA Driver
Oct 31 13:34:00.453049 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 13:34:00.469415 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 13:34:00.472914 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 13:34:00.516432 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 31 13:34:00.518852 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 31 13:34:00.520634 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 31 13:34:00.569188 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 13:34:00.573688 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 13:34:00.599182 systemd-udevd[632]: Using default interface naming scheme 'v257'.
Oct 31 13:34:00.606954 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 13:34:00.610763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 31 13:34:00.632822 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 13:34:00.635449 dracut-pre-trigger[704]: rd.md=0: removing MD RAID activation
Oct 31 13:34:00.635854 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 13:34:00.659793 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 13:34:00.661969 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 13:34:00.678684 systemd-networkd[743]: lo: Link UP
Oct 31 13:34:00.678693 systemd-networkd[743]: lo: Gained carrier
Oct 31 13:34:00.679145 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 13:34:00.680571 systemd[1]: Reached target network.target - Network.
Oct 31 13:34:00.715068 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 13:34:00.717467 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 31 13:34:00.755427 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 31 13:34:00.774268 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 31 13:34:00.783909 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 31 13:34:00.792927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 13:34:00.796407 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 31 13:34:00.802501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 13:34:00.802622 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 13:34:00.804956 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 13:34:00.809964 systemd-networkd[743]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 31 13:34:00.809982 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 13:34:00.810224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 13:34:00.810994 systemd-networkd[743]: eth0: Link UP
Oct 31 13:34:00.817675 disk-uuid[806]: Primary Header is updated.
Oct 31 13:34:00.817675 disk-uuid[806]: Secondary Entries is updated.
Oct 31 13:34:00.817675 disk-uuid[806]: Secondary Header is updated.
Oct 31 13:34:00.811137 systemd-networkd[743]: eth0: Gained carrier
Oct 31 13:34:00.811147 systemd-networkd[743]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 31 13:34:00.825383 systemd-networkd[743]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 13:34:00.842594 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 13:34:00.885994 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 31 13:34:00.887704 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 13:34:00.889429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 13:34:00.891635 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 13:34:00.894596 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 31 13:34:00.912599 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 13:34:01.851666 disk-uuid[808]: Warning: The kernel is still using the old partition table.
Oct 31 13:34:01.851666 disk-uuid[808]: The new table will be used at the next reboot or after you
Oct 31 13:34:01.851666 disk-uuid[808]: run partprobe(8) or kpartx(8)
Oct 31 13:34:01.851666 disk-uuid[808]: The operation has completed successfully.
Oct 31 13:34:01.856721 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 31 13:34:01.856832 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 31 13:34:01.859070 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 31 13:34:01.888286 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838)
Oct 31 13:34:01.890528 kernel: BTRFS info (device vda6): first mount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1
Oct 31 13:34:01.890559 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 31 13:34:01.893282 kernel: BTRFS info (device vda6): turning on async discard
Oct 31 13:34:01.893306 kernel: BTRFS info (device vda6): enabling free space tree
Oct 31 13:34:01.899361 kernel: BTRFS info (device vda6): last unmount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1
Oct 31 13:34:01.900340 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 31 13:34:01.902164 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 31 13:34:02.002579 ignition[857]: Ignition 2.22.0
Oct 31 13:34:02.002592 ignition[857]: Stage: fetch-offline
Oct 31 13:34:02.002631 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Oct 31 13:34:02.002641 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 13:34:02.002788 ignition[857]: parsed url from cmdline: ""
Oct 31 13:34:02.002790 ignition[857]: no config URL provided
Oct 31 13:34:02.002795 ignition[857]: reading system config file "/usr/lib/ignition/user.ign"
Oct 31 13:34:02.002802 ignition[857]: no config at "/usr/lib/ignition/user.ign"
Oct 31 13:34:02.002841 ignition[857]: op(1): [started] loading QEMU firmware config module
Oct 31 13:34:02.002845 ignition[857]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 31 13:34:02.009247 ignition[857]: op(1): [finished] loading QEMU firmware config module
Oct 31 13:34:02.009275 ignition[857]: QEMU firmware config was not found. Ignoring...
Oct 31 13:34:02.053777 ignition[857]: parsing config with SHA512: 21e6d3ff1df2d9175647974bb81c4b6fe1f9bf46ae31874c3e099d9df9394a34481d299bfd461d5869280f87646f3baf6517a1e8681abc236e0ba84c8b0cfeb4
Oct 31 13:34:02.057785 unknown[857]: fetched base config from "system"
Oct 31 13:34:02.057807 unknown[857]: fetched user config from "qemu"
Oct 31 13:34:02.058230 ignition[857]: fetch-offline: fetch-offline passed
Oct 31 13:34:02.059927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 13:34:02.058324 ignition[857]: Ignition finished successfully
Oct 31 13:34:02.062577 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 31 13:34:02.067107 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 31 13:34:02.097917 ignition[873]: Ignition 2.22.0
Oct 31 13:34:02.097932 ignition[873]: Stage: kargs
Oct 31 13:34:02.098059 ignition[873]: no configs at "/usr/lib/ignition/base.d"
Oct 31 13:34:02.098067 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 13:34:02.098816 ignition[873]: kargs: kargs passed
Oct 31 13:34:02.098860 ignition[873]: Ignition finished successfully
Oct 31 13:34:02.102633 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 31 13:34:02.104876 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 31 13:34:02.130111 ignition[881]: Ignition 2.22.0
Oct 31 13:34:02.130127 ignition[881]: Stage: disks
Oct 31 13:34:02.130292 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Oct 31 13:34:02.134611 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 31 13:34:02.130301 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 13:34:02.135891 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 31 13:34:02.131058 ignition[881]: disks: disks passed
Oct 31 13:34:02.137740 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 31 13:34:02.131100 ignition[881]: Ignition finished successfully
Oct 31 13:34:02.140049 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 13:34:02.142047 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 13:34:02.143660 systemd[1]: Reached target basic.target - Basic System.
Oct 31 13:34:02.146513 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 31 13:34:02.181499 systemd-fsck[891]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 31 13:34:02.187577 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 31 13:34:02.190183 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 31 13:34:02.257288 kernel: EXT4-fs (vda9): mounted filesystem 921f74fb-be87-4ddd-b9ea-687813833434 r/w with ordered data mode. Quota mode: none.
Oct 31 13:34:02.257321 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 31 13:34:02.258603 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 31 13:34:02.261359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 13:34:02.262982 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 31 13:34:02.264142 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 31 13:34:02.264173 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 31 13:34:02.264197 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 13:34:02.272552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 31 13:34:02.274561 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 31 13:34:02.279272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (899)
Oct 31 13:34:02.281672 kernel: BTRFS info (device vda6): first mount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1
Oct 31 13:34:02.281700 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 31 13:34:02.284548 kernel: BTRFS info (device vda6): turning on async discard
Oct 31 13:34:02.284575 kernel: BTRFS info (device vda6): enabling free space tree
Oct 31 13:34:02.285525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 13:34:02.311498 systemd-networkd[743]: eth0: Gained IPv6LL
Oct 31 13:34:02.314641 initrd-setup-root[923]: cut: /sysroot/etc/passwd: No such file or directory
Oct 31 13:34:02.318990 initrd-setup-root[930]: cut: /sysroot/etc/group: No such file or directory
Oct 31 13:34:02.323157 initrd-setup-root[937]: cut: /sysroot/etc/shadow: No such file or directory
Oct 31 13:34:02.327016 initrd-setup-root[944]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 31 13:34:02.392117 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 31 13:34:02.394627 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 31 13:34:02.396256 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 31 13:34:02.422075 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 31 13:34:02.423815 kernel: BTRFS info (device vda6): last unmount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1
Oct 31 13:34:02.441454 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 31 13:34:02.456391 ignition[1014]: INFO : Ignition 2.22.0
Oct 31 13:34:02.456391 ignition[1014]: INFO : Stage: mount
Oct 31 13:34:02.458054 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 13:34:02.458054 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 13:34:02.458054 ignition[1014]: INFO : mount: mount passed
Oct 31 13:34:02.458054 ignition[1014]: INFO : Ignition finished successfully
Oct 31 13:34:02.458992 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 31 13:34:02.461119 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 31 13:34:03.259011 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 13:34:03.279272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1025)
Oct 31 13:34:03.279315 kernel: BTRFS info (device vda6): first mount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1
Oct 31 13:34:03.279327 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 31 13:34:03.284799 kernel: BTRFS info (device vda6): turning on async discard
Oct 31 13:34:03.284824 kernel: BTRFS info (device vda6): enabling free space tree
Oct 31 13:34:03.286154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 13:34:03.320049 ignition[1042]: INFO : Ignition 2.22.0
Oct 31 13:34:03.320049 ignition[1042]: INFO : Stage: files
Oct 31 13:34:03.321870 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 13:34:03.321870 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 13:34:03.321870 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 13:34:03.325664 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 13:34:03.325664 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 13:34:03.325664 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 13:34:03.325664 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 13:34:03.325664 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 13:34:03.325026 unknown[1042]: wrote ssh authorized keys file for user: core
Oct 31 13:34:03.333654 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 31 13:34:03.333654 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Oct 31 13:34:04.380088 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 31 13:34:04.487908 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 31 13:34:04.487908 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 13:34:04.492044 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 13:34:04.492044 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 13:34:04.492044 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 13:34:04.492044 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 13:34:04.492044 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 13:34:04.492044 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 13:34:04.492044 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 13:34:04.505124 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 13:34:04.505124 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 13:34:04.505124 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 31 13:34:04.505124 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 31 13:34:04.513657 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 31 13:34:04.513657 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Oct 31 13:34:05.004485 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 31 13:34:05.653568 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Oct 31 13:34:05.653568 ignition[1042]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 31 13:34:05.658318 ignition[1042]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 13:34:05.662796 ignition[1042]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 13:34:05.662796 ignition[1042]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 31 13:34:05.662796 ignition[1042]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 31 13:34:05.662796 ignition[1042]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 13:34:05.670491 ignition[1042]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 13:34:05.670491 ignition[1042]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 31 13:34:05.670491 ignition[1042]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 13:34:05.688466 ignition[1042]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 13:34:05.691863 ignition[1042]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 13:34:05.694393 ignition[1042]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 13:34:05.694393 ignition[1042]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 13:34:05.694393 ignition[1042]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 13:34:05.694393 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 13:34:05.694393 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 13:34:05.694393 ignition[1042]: INFO : files: files passed
Oct 31 13:34:05.694393 ignition[1042]: INFO : Ignition finished successfully
Oct 31 13:34:05.695097 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 31 13:34:05.698173 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 31 13:34:05.700369 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 31 13:34:05.717435 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 13:34:05.717566 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 31 13:34:05.720695 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 31 13:34:05.722653 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 13:34:05.722653 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 13:34:05.726085 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 13:34:05.725256 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 13:34:05.727776 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 31 13:34:05.730564 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 31 13:34:05.773415 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 13:34:05.773559 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 31 13:34:05.775837 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 31 13:34:05.777843 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 31 13:34:05.779917 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 31 13:34:05.780850 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 31 13:34:05.816836 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 13:34:05.819562 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 31 13:34:05.840478 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 31 13:34:05.840721 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 31 13:34:05.843024 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 13:34:05.845352 systemd[1]: Stopped target timers.target - Timer Units.
Oct 31 13:34:05.847303 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 13:34:05.847431 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 13:34:05.850273 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 31 13:34:05.851485 systemd[1]: Stopped target basic.target - Basic System.
Oct 31 13:34:05.853468 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 31 13:34:05.855553 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 13:34:05.857577 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 31 13:34:05.859649 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 31 13:34:05.861794 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 31 13:34:05.863818 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 13:34:05.866054 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 31 13:34:05.868043 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 31 13:34:05.870239 systemd[1]: Stopped target swap.target - Swaps.
Oct 31 13:34:05.871995 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 13:34:05.872115 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 13:34:05.874729 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 31 13:34:05.876951 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 13:34:05.879152 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 31 13:34:05.882333 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 13:34:05.883684 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 13:34:05.883808 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 31 13:34:05.886985 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 13:34:05.887113 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 13:34:05.889399 systemd[1]: Stopped target paths.target - Path Units.
Oct 31 13:34:05.891218 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 13:34:05.892394 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 13:34:05.894640 systemd[1]: Stopped target slices.target - Slice Units.
Oct 31 13:34:05.896614 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 31 13:34:05.899045 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 13:34:05.899141 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 13:34:05.900866 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 13:34:05.900952 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 13:34:05.902802 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 13:34:05.902970 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 13:34:05.904682 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 13:34:05.904790 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 31 13:34:05.907310 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 31 13:34:05.909915 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 31 13:34:05.911180 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 31 13:34:05.911328 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 13:34:05.913753 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 13:34:05.913861 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 13:34:05.915730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 13:34:05.915837 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 13:34:05.921562 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 13:34:05.930505 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 31 13:34:05.939901 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 13:34:05.944744 ignition[1099]: INFO : Ignition 2.22.0
Oct 31 13:34:05.944744 ignition[1099]: INFO : Stage: umount
Oct 31 13:34:05.946910 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 13:34:05.946910 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 13:34:05.946910 ignition[1099]: INFO : umount: umount passed
Oct 31 13:34:05.946910 ignition[1099]: INFO : Ignition finished successfully
Oct 31 13:34:05.947525 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 13:34:05.947647 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 31 13:34:05.950509 systemd[1]: Stopped target network.target - Network.
Oct 31 13:34:05.951537 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 13:34:05.951592 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 31 13:34:05.953389 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 13:34:05.953442 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 31 13:34:05.955368 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 13:34:05.955418 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 31 13:34:05.957456 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 31 13:34:05.957501 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 31 13:34:05.959712 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 31 13:34:05.961711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 31 13:34:05.967015 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 13:34:05.967131 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 31 13:34:05.970913 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 31 13:34:05.973039 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 13:34:05.973076 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 13:34:05.976961 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 31 13:34:05.979092 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 13:34:05.979163 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 13:34:05.981875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 13:34:05.985368 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 13:34:05.990561 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 31 13:34:05.995009 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 13:34:05.995146 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 13:34:05.996883 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 13:34:05.997643 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 31 13:34:06.000507 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 13:34:06.000557 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 31 13:34:06.002084 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 13:34:06.002117 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 13:34:06.003956 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 13:34:06.004010 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 13:34:06.006732 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 13:34:06.006781 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 31 13:34:06.009646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 13:34:06.009698 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 13:34:06.012737 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 13:34:06.012791 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 31 13:34:06.014737 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 31 13:34:06.015981 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 31 13:34:06.016042 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 13:34:06.018102 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 13:34:06.018152 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 31 13:34:06.019971 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 13:34:06.020015 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 31 13:34:06.021907 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 13:34:06.021955 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 13:34:06.024353 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 31 13:34:06.024402 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 13:34:06.026458 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 13:34:06.026504 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 13:34:06.028572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 13:34:06.028620 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 13:34:06.031331 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 13:34:06.036424 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 31 13:34:06.041972 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 13:34:06.042066 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 31 13:34:06.044543 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 31 13:34:06.046293 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 31 13:34:06.067025 systemd[1]: Switching root.
Oct 31 13:34:06.111595 systemd-journald[345]: Journal stopped
Oct 31 13:34:06.874846 systemd-journald[345]: Received SIGTERM from PID 1 (systemd).
Oct 31 13:34:06.874900 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 13:34:06.874912 kernel: SELinux: policy capability open_perms=1
Oct 31 13:34:06.874922 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 13:34:06.874933 kernel: SELinux: policy capability always_check_network=0
Oct 31 13:34:06.874944 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 13:34:06.874958 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 13:34:06.874968 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 13:34:06.874977 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 13:34:06.874987 kernel: SELinux: policy capability userspace_initial_context=0
Oct 31 13:34:06.874999 systemd[1]: Successfully loaded SELinux policy in 60.895ms.
Oct 31 13:34:06.875018 kernel: audit: type=1403 audit(1761917646.305:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 31 13:34:06.875029 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.289ms.
Oct 31 13:34:06.875057 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 31 13:34:06.875070 systemd[1]: Detected virtualization kvm.
Oct 31 13:34:06.875081 systemd[1]: Detected architecture arm64.
Oct 31 13:34:06.875091 systemd[1]: Detected first boot.
Oct 31 13:34:06.875103 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 31 13:34:06.875113 zram_generator::config[1144]: No configuration found.
Oct 31 13:34:06.875127 kernel: NET: Registered PF_VSOCK protocol family
Oct 31 13:34:06.875138 systemd[1]: Populated /etc with preset unit settings.
Oct 31 13:34:06.875149 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 31 13:34:06.875160 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 31 13:34:06.875172 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 31 13:34:06.875184 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 31 13:34:06.875196 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 31 13:34:06.875208 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 31 13:34:06.875227 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 31 13:34:06.875241 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 31 13:34:06.875253 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 31 13:34:06.875328 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 31 13:34:06.875340 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 31 13:34:06.875352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 13:34:06.875363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 13:34:06.875374 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 31 13:34:06.875385 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 31 13:34:06.875397 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 31 13:34:06.875409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 13:34:06.875420 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 31 13:34:06.875431 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 13:34:06.875442 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 13:34:06.875453 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 31 13:34:06.875465 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 31 13:34:06.875477 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 31 13:34:06.875492 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 31 13:34:06.875502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 13:34:06.875514 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 13:34:06.875525 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 13:34:06.875535 systemd[1]: Reached target swap.target - Swaps.
Oct 31 13:34:06.875546 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 31 13:34:06.875558 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 31 13:34:06.875570 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 31 13:34:06.875582 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 13:34:06.875593 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 13:34:06.875603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 13:34:06.875634 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 31 13:34:06.875647 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 31 13:34:06.875659 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 31 13:34:06.875673 systemd[1]: Mounting media.mount - External Media Directory...
Oct 31 13:34:06.875684 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 31 13:34:06.875695 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 31 13:34:06.875706 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 31 13:34:06.875717 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 13:34:06.875728 systemd[1]: Reached target machines.target - Containers.
Oct 31 13:34:06.875739 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 31 13:34:06.875753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 13:34:06.875763 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 13:34:06.875774 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 31 13:34:06.875785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 13:34:06.875795 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 13:34:06.875806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 13:34:06.875819 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 31 13:34:06.875830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 13:34:06.875840 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 13:34:06.875851 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 31 13:34:06.875861 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 31 13:34:06.875872 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 31 13:34:06.875883 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 31 13:34:06.875895 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 31 13:34:06.875906 kernel: fuse: init (API version 7.41)
Oct 31 13:34:06.875916 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 13:34:06.875926 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 13:34:06.875937 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 13:34:06.875948 kernel: ACPI: bus type drm_connector registered
Oct 31 13:34:06.875959 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 31 13:34:06.875971 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 31 13:34:06.875982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 13:34:06.875993 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 31 13:34:06.876004 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 31 13:34:06.876032 systemd-journald[1223]: Collecting audit messages is disabled.
Oct 31 13:34:06.876054 systemd[1]: Mounted media.mount - External Media Directory.
Oct 31 13:34:06.876065 systemd-journald[1223]: Journal started
Oct 31 13:34:06.876085 systemd-journald[1223]: Runtime Journal (/run/log/journal/c18ba874891640d28e12711460aba85c) is 6M, max 48.5M, 42.4M free.
Oct 31 13:34:06.655862 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 13:34:06.680127 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 31 13:34:06.680566 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 31 13:34:06.878794 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 13:34:06.879778 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 31 13:34:06.880965 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 31 13:34:06.882167 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 31 13:34:06.883440 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 31 13:34:06.884787 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 13:34:06.887617 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 13:34:06.887806 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 31 13:34:06.889144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 13:34:06.889363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 13:34:06.890585 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 13:34:06.890749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 13:34:06.891995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 13:34:06.892154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 13:34:06.893818 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 13:34:06.893975 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 31 13:34:06.895377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 13:34:06.895520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 13:34:06.896817 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 13:34:06.898274 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 13:34:06.900195 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 31 13:34:06.902165 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 31 13:34:06.915283 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 13:34:06.916998 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 31 13:34:06.919420 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 31 13:34:06.921374 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 31 13:34:06.922593 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 13:34:06.922632 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 13:34:06.924607 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 31 13:34:06.926241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 13:34:06.928601 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 31 13:34:06.930551 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 31 13:34:06.931861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 13:34:06.932712 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 31 13:34:06.933943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 13:34:06.938349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 13:34:06.940341 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 31 13:34:06.942502 systemd-journald[1223]: Time spent on flushing to /var/log/journal/c18ba874891640d28e12711460aba85c is 17.044ms for 872 entries.
Oct 31 13:34:06.942502 systemd-journald[1223]: System Journal (/var/log/journal/c18ba874891640d28e12711460aba85c) is 8M, max 163.5M, 155.5M free.
Oct 31 13:34:06.968332 systemd-journald[1223]: Received client request to flush runtime journal.
Oct 31 13:34:06.968385 kernel: loop1: detected capacity change from 0 to 119400
Oct 31 13:34:06.942429 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 13:34:06.947313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 13:34:06.949137 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 31 13:34:06.951356 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 31 13:34:06.952689 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 31 13:34:06.956506 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 31 13:34:06.959108 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 31 13:34:06.969681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 13:34:06.971985 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 31 13:34:06.972896 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 31 13:34:06.972907 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 31 13:34:06.976042 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 13:34:06.980312 kernel: loop2: detected capacity change from 0 to 100192
Oct 31 13:34:06.979110 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 31 13:34:06.995461 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 31 13:34:07.005291 kernel: loop3: detected capacity change from 0 to 211168
Oct 31 13:34:07.017424 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 31 13:34:07.020240 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 13:34:07.022256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 13:34:07.033373 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 31 13:34:07.038272 kernel: loop4: detected capacity change from 0 to 119400
Oct 31 13:34:07.040868 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Oct 31 13:34:07.040890 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Oct 31 13:34:07.041275 kernel: loop5: detected capacity change from 0 to 100192
Oct 31 13:34:07.045153 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 13:34:07.049285 kernel: loop6: detected capacity change from 0 to 211168
Oct 31 13:34:07.053865 (sd-merge)[1284]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 31 13:34:07.056449 (sd-merge)[1284]: Merged extensions into '/usr'.
Oct 31 13:34:07.059911 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 31 13:34:07.059932 systemd[1]: Reloading...
Oct 31 13:34:07.119645 zram_generator::config[1323]: No configuration found.
Oct 31 13:34:07.134817 systemd-resolved[1282]: Positive Trust Anchors:
Oct 31 13:34:07.134834 systemd-resolved[1282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 13:34:07.134837 systemd-resolved[1282]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 31 13:34:07.134872 systemd-resolved[1282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 13:34:07.141480 systemd-resolved[1282]: Defaulting to hostname 'linux'.
Oct 31 13:34:07.253140 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 13:34:07.253321 systemd[1]: Reloading finished in 193 ms.
Oct 31 13:34:07.289875 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 31 13:34:07.291327 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 13:34:07.292753 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 31 13:34:07.295855 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 13:34:07.308373 systemd[1]: Starting ensure-sysext.service...
Oct 31 13:34:07.310139 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 13:34:07.322566 systemd[1]: Reload requested from client PID 1353 ('systemctl') (unit ensure-sysext.service)...
Oct 31 13:34:07.322585 systemd[1]: Reloading...
Oct 31 13:34:07.323427 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 31 13:34:07.323453 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 31 13:34:07.323683 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 13:34:07.323867 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 31 13:34:07.324495 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 13:34:07.324686 systemd-tmpfiles[1354]: ACLs are not supported, ignoring.
Oct 31 13:34:07.324727 systemd-tmpfiles[1354]: ACLs are not supported, ignoring.
Oct 31 13:34:07.328319 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 13:34:07.328415 systemd-tmpfiles[1354]: Skipping /boot
Oct 31 13:34:07.334431 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 13:34:07.334548 systemd-tmpfiles[1354]: Skipping /boot
Oct 31 13:34:07.370300 zram_generator::config[1390]: No configuration found.
Oct 31 13:34:07.492092 systemd[1]: Reloading finished in 169 ms.
Oct 31 13:34:07.516893 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 31 13:34:07.533206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 13:34:07.541780 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 31 13:34:07.544170 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 31 13:34:07.556511 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 31 13:34:07.560481 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 31 13:34:07.564747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 13:34:07.569846 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 31 13:34:07.574112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 13:34:07.578783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 13:34:07.581055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 13:34:07.584371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 13:34:07.586403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 13:34:07.586527 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 31 13:34:07.589327 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 31 13:34:07.592749 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 13:34:07.592896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 13:34:07.595762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 13:34:07.595936 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 13:34:07.604337 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 31 13:34:07.613399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 13:34:07.615324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 13:34:07.618314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 13:34:07.619895 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 13:34:07.622451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 13:34:07.623519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 13:34:07.623646 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 31 13:34:07.623729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 13:34:07.623813 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 13:34:07.625324 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 31 13:34:07.627883 systemd-udevd[1425]: Using default interface naming scheme 'v257'.
Oct 31 13:34:07.631270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 13:34:07.632338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 13:34:07.633011 augenrules[1456]: No rules
Oct 31 13:34:07.636304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 13:34:07.637920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 13:34:07.638022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 31 13:34:07.638119 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 13:34:07.640075 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 31 13:34:07.640285 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 31 13:34:07.641923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 13:34:07.642084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 13:34:07.644007 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 13:34:07.647477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 13:34:07.649167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 13:34:07.651353 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 13:34:07.651994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 13:34:07.653645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 13:34:07.653820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 13:34:07.659901 systemd[1]: Finished ensure-sysext.service.
Oct 31 13:34:07.667485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 13:34:07.668718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 13:34:07.668810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 13:34:07.670407 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 31 13:34:07.737558 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 31 13:34:07.743486 systemd-networkd[1488]: lo: Link UP
Oct 31 13:34:07.743499 systemd-networkd[1488]: lo: Gained carrier
Oct 31 13:34:07.744878 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 13:34:07.746684 systemd[1]: Reached target network.target - Network.
Oct 31 13:34:07.750251 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 31 13:34:07.753177 systemd-networkd[1488]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 31 13:34:07.753195 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 13:34:07.754640 systemd-networkd[1488]: eth0: Link UP
Oct 31 13:34:07.754754 systemd-networkd[1488]: eth0: Gained carrier
Oct 31 13:34:07.754774 systemd-networkd[1488]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 31 13:34:07.755131 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 31 13:34:07.766905 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 31 13:34:07.773500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 13:34:07.775048 systemd[1]: Reached target time-set.target - System Time Set.
Oct 31 13:34:07.777660 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 31 13:34:07.782330 systemd-networkd[1488]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 13:34:07.782993 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection.
Oct 31 13:34:07.784181 systemd-timesyncd[1489]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 31 13:34:07.784241 systemd-timesyncd[1489]: Initial clock synchronization to Fri 2025-10-31 13:34:08.019362 UTC.
Oct 31 13:34:07.786479 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 31 13:34:07.796420 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 31 13:34:07.859497 ldconfig[1422]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 13:34:07.864354 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 31 13:34:07.870486 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 31 13:34:07.888140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 13:34:07.893087 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 31 13:34:07.923648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 13:34:07.926147 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 13:34:07.927400 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 31 13:34:07.928692 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 31 13:34:07.930103 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 31 13:34:07.931406 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 31 13:34:07.932739 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 31 13:34:07.934101 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 13:34:07.934136 systemd[1]: Reached target paths.target - Path Units.
Oct 31 13:34:07.935131 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 13:34:07.936930 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 31 13:34:07.939478 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 31 13:34:07.942240 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 31 13:34:07.943699 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 31 13:34:07.945010 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 31 13:34:07.949097 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 31 13:34:07.950506 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 31 13:34:07.952211 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 31 13:34:07.953466 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 13:34:07.954443 systemd[1]: Reached target basic.target - Basic System.
Oct 31 13:34:07.955424 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 31 13:34:07.955457 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 31 13:34:07.956385 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 31 13:34:07.958369 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 31 13:34:07.960231 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 31 13:34:07.962322 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 31 13:34:07.964442 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 31 13:34:07.965631 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 31 13:34:07.968307 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 31 13:34:07.969318 jq[1537]: false
Oct 31 13:34:07.970371 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 31 13:34:07.973401 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 31 13:34:07.975914 extend-filesystems[1538]: Found /dev/vda6
Oct 31 13:34:07.976204 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 31 13:34:07.981393 extend-filesystems[1538]: Found /dev/vda9
Oct 31 13:34:07.980331 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 31 13:34:07.982342 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 13:34:07.982712 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 31 13:34:07.984492 systemd[1]: Starting update-engine.service - Update Engine...
Oct 31 13:34:07.984672 extend-filesystems[1538]: Checking size of /dev/vda9
Oct 31 13:34:07.987694 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 31 13:34:07.991301 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 31 13:34:07.992763 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 13:34:07.992950 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 31 13:34:07.993251 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 13:34:07.993437 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 31 13:34:07.995686 jq[1558]: true
Oct 31 13:34:07.996596 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 13:34:07.996773 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 31 13:34:07.998613 extend-filesystems[1538]: Resized partition /dev/vda9
Oct 31 13:34:08.007321 extend-filesystems[1569]: resize2fs 1.47.3 (8-Jul-2025)
Oct 31 13:34:08.013768 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 31 13:34:08.014600 update_engine[1555]: I20251031 13:34:08.013979 1555 main.cc:92] Flatcar Update Engine starting
Oct 31 13:34:08.027047 tar[1563]: linux-arm64/LICENSE
Oct 31 13:34:08.027492 tar[1563]: linux-arm64/helm
Oct 31 13:34:08.028584 jq[1568]: true
Oct 31 13:34:08.035177 dbus-daemon[1535]: [system] SELinux support is enabled
Oct 31 13:34:08.035420 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 31 13:34:08.039555 update_engine[1555]: I20251031 13:34:08.039498 1555 update_check_scheduler.cc:74] Next update check in 6m55s
Oct 31 13:34:08.041600 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 13:34:08.041635 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 31 13:34:08.044369 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 13:34:08.044391 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 31 13:34:08.057697 systemd[1]: Started update-engine.service - Update Engine.
Oct 31 13:34:08.061742 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 31 13:34:08.064310 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 31 13:34:08.080175 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 31 13:34:08.080704 systemd-logind[1552]: New seat seat0.
Oct 31 13:34:08.080916 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 13:34:08.080916 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 31 13:34:08.080916 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 31 13:34:08.086522 extend-filesystems[1538]: Resized filesystem in /dev/vda9
Oct 31 13:34:08.088024 bash[1601]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 13:34:08.083182 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 13:34:08.085356 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 31 13:34:08.090826 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 31 13:34:08.097566 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 31 13:34:08.103785 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 31 13:34:08.134896 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 13:34:08.229453 containerd[1582]: time="2025-10-31T13:34:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 31 13:34:08.232331 containerd[1582]: time="2025-10-31T13:34:08.231957089Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 31 13:34:08.249613 containerd[1582]: time="2025-10-31T13:34:08.249550830Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.059µs"
Oct 31 13:34:08.249613 containerd[1582]: time="2025-10-31T13:34:08.249588218Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 31 13:34:08.249613 containerd[1582]: time="2025-10-31T13:34:08.249607159Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 31 13:34:08.249899 containerd[1582]: time="2025-10-31T13:34:08.249859694Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 31 13:34:08.249969 containerd[1582]: time="2025-10-31T13:34:08.249939988Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 31 13:34:08.249994 containerd[1582]: time="2025-10-31T13:34:08.249978406Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 31 13:34:08.250113 containerd[1582]: time="2025-10-31T13:34:08.250080359Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 31 13:34:08.250113 containerd[1582]: time="2025-10-31T13:34:08.250104406Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 31 13:34:08.250454 containerd[1582]: time="2025-10-31T13:34:08.250416359Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 31 13:34:08.250454 containerd[1582]: time="2025-10-31T13:34:08.250442547Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 31 13:34:08.250507 containerd[1582]: time="2025-10-31T13:34:08.250456300Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 31 13:34:08.250526 containerd[1582]: time="2025-10-31T13:34:08.250513082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 31 13:34:08.250694 containerd[1582]: time="2025-10-31T13:34:08.250662429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 31 13:34:08.251013 containerd[1582]: time="2025-10-31T13:34:08.250978829Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 31 13:34:08.251038 containerd[1582]: time="2025-10-31T13:34:08.251022146Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 31 13:34:08.251038 containerd[1582]: time="2025-10-31T13:34:08.251034170Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 31 13:34:08.251125 containerd[1582]: time="2025-10-31T13:34:08.251109234Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 31 13:34:08.251605 containerd[1582]: time="2025-10-31T13:34:08.251567775Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 31 13:34:08.251679 containerd[1582]: time="2025-10-31T13:34:08.251662604Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 13:34:08.255621 containerd[1582]: time="2025-10-31T13:34:08.255588283Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 31 13:34:08.255687 containerd[1582]: time="2025-10-31T13:34:08.255650212Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 31 13:34:08.255687 containerd[1582]: time="2025-10-31T13:34:08.255666683Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 31 13:34:08.255687 containerd[1582]: time="2025-10-31T13:34:08.255678130Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 31 13:34:08.255741 containerd[1582]: time="2025-10-31T13:34:08.255698224Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 31 13:34:08.255741 containerd[1582]: time="2025-10-31T13:34:08.255709794Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 31 13:34:08.255741 containerd[1582]: time="2025-10-31T13:34:08.255721077Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 31 13:34:08.255741 containerd[1582]: time="2025-10-31T13:34:08.255737959Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 31 13:34:08.255836 containerd[1582]: time="2025-10-31T13:34:08.255814012Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 31 13:34:08.255860 containerd[1582]: time="2025-10-31T13:34:08.255838553Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 31 13:34:08.255860 containerd[1582]: time="2025-10-31T13:34:08.255850782Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 31 13:34:08.255892 containerd[1582]: time="2025-10-31T13:34:08.255863135Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 31 13:34:08.256048 containerd[1582]: time="2025-10-31T13:34:08.256027429Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 31 13:34:08.256132 containerd[1582]: time="2025-10-31T13:34:08.256066917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 31 13:34:08.256156 containerd[1582]: time="2025-10-31T13:34:08.256141900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 31 13:34:08.256175 containerd[1582]: time="2025-10-31T13:34:08.256158041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 31 13:34:08.256175 containerd[1582]: time="2025-10-31T13:34:08.256170188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 31 13:34:08.256206 containerd[1582]: time="2025-10-31T13:34:08.256179988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 31 13:34:08.256206 containerd[1582]: time="2025-10-31T13:34:08.256191682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 31 13:34:08.256206 containerd[1582]: time="2025-10-31T13:34:08.256202223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 31 13:34:08.256300 containerd[1582]: time="2025-10-31T13:34:08.256256988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 31 13:34:08.256300 containerd[1582]: time="2025-10-31T13:34:08.256273829Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 31 13:34:08.256336 containerd[1582]: time="2025-10-31T13:34:08.256284905Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 31 13:34:08.256696 containerd[1582]: time="2025-10-31T13:34:08.256671593Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 31 13:34:08.256752 containerd[1582]: time="2025-10-31T13:34:08.256737929Z" level=info msg="Start snapshots syncer"
Oct 31 13:34:08.256782 containerd[1582]: time="2025-10-31T13:34:08.256770376Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 31 13:34:08.257345 containerd[1582]: time="2025-10-31T13:34:08.257294016Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 31 13:34:08.257427 containerd[1582]: time="2025-10-31T13:34:08.257374475Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 31 13:34:08.257458 containerd[1582]: time="2025-10-31T13:34:08.257442416Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 31 13:34:08.257567 containerd[1582]: time="2025-10-31T13:34:08.257545645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 31 13:34:08.257598 containerd[1582]: time="2025-10-31T13:34:08.257584104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 31 13:34:08.257619 containerd[1582]: time="2025-10-31T13:34:08.257596951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 31 13:34:08.257619 containerd[1582]: time="2025-10-31T13:34:08.257607204Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 31 13:34:08.257651 containerd[1582]: time="2025-10-31T13:34:08.257620710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 31 13:34:08.257651 containerd[1582]: time="2025-10-31T13:34:08.257631581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 31 13:34:08.257651 containerd[1582]: time="2025-10-31T13:34:08.257641957Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 31 13:34:08.257698 containerd[1582]: time="2025-10-31T13:34:08.257664481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 31 13:34:08.257698 containerd[1582]: time="2025-10-31T13:34:08.257675681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 31 13:34:08.257698 containerd[1582]: time="2025-10-31T13:34:08.257686181Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 31 13:34:08.257755 containerd[1582]: time="2025-10-31T13:34:08.257715869Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 31 13:34:08.257755 containerd[1582]: time="2025-10-31T13:34:08.257730239Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 31 13:34:08.257755 containerd[1582]: time="2025-10-31T13:34:08.257738639Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 31 13:34:08.257813 containerd[1582]: time="2025-10-31T13:34:08.257747780Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 31 13:34:08.257813 containerd[1582]: time="2025-10-31T13:34:08.257798263Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 31 13:34:08.257855 containerd[1582]: time="2025-10-31T13:34:08.257814075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 31 13:34:08.257855 containerd[1582]: time="2025-10-31T13:34:08.257825769Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 31 13:34:08.257963 containerd[1582]: time="2025-10-31T13:34:08.257942998Z" level=info msg="runtime interface created"
Oct 31 13:34:08.257963 containerd[1582]: time="2025-10-31T13:34:08.257956216Z" level=info msg="created NRI interface"
Oct 31 13:34:08.258003 containerd[1582]: time="2025-10-31T13:34:08.257967621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 31 13:34:08.258003 containerd[1582]: time="2025-10-31T13:34:08.257979027Z" level=info msg="Connect containerd service"
Oct 31 13:34:08.258040 containerd[1582]: time="2025-10-31T13:34:08.258005545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 31 13:34:08.258761 containerd[1582]: time="2025-10-31T13:34:08.258731691Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 31 13:34:08.287218 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 13:34:08.308351 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 31 13:34:08.311598 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 31 13:34:08.329354 containerd[1582]: time="2025-10-31T13:34:08.329294378Z" level=info msg="Start subscribing containerd event"
Oct 31 13:34:08.329417 containerd[1582]: time="2025-10-31T13:34:08.329365820Z" level=info msg="Start recovering state"
Oct 31 13:34:08.329551 containerd[1582]: time="2025-10-31T13:34:08.329467031Z" level=info msg="Start event monitor"
Oct 31 13:34:08.329551 containerd[1582]: time="2025-10-31T13:34:08.329492066Z" level=info msg="Start cni network conf syncer for default"
Oct 31 13:34:08.329551 containerd[1582]: time="2025-10-31T13:34:08.329501990Z" level=info msg="Start streaming server"
Oct 31 13:34:08.330306 containerd[1582]: time="2025-10-31T13:34:08.329608966Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 31 13:34:08.330306 containerd[1582]: time="2025-10-31T13:34:08.329627866Z" level=info msg="runtime interface starting up..."
Oct 31 13:34:08.330306 containerd[1582]: time="2025-10-31T13:34:08.329635855Z" level=info msg="starting plugins..."
Oct 31 13:34:08.330306 containerd[1582]: time="2025-10-31T13:34:08.329651913Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 31 13:34:08.330306 containerd[1582]: time="2025-10-31T13:34:08.329797431Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 31 13:34:08.330306 containerd[1582]: time="2025-10-31T13:34:08.329844948Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 31 13:34:08.330306 containerd[1582]: time="2025-10-31T13:34:08.329934919Z" level=info msg="containerd successfully booted in 0.100865s"
Oct 31 13:34:08.330018 systemd[1]: Started containerd.service - containerd container runtime.
Oct 31 13:34:08.332609 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 13:34:08.333319 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 31 13:34:08.340514 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 31 13:34:08.369555 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 31 13:34:08.372700 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 31 13:34:08.375597 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 31 13:34:08.377200 systemd[1]: Reached target getty.target - Login Prompts.
Oct 31 13:34:08.388824 tar[1563]: linux-arm64/README.md
Oct 31 13:34:08.409386 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 31 13:34:08.840636 systemd-networkd[1488]: eth0: Gained IPv6LL
Oct 31 13:34:08.842839 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 31 13:34:08.844786 systemd[1]: Reached target network-online.target - Network is Online.
Oct 31 13:34:08.847191 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 31 13:34:08.849526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 13:34:08.868679 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 31 13:34:08.886734 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 31 13:34:08.888343 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 31 13:34:08.888530 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 31 13:34:08.890688 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 31 13:34:09.428803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 13:34:09.430669 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 31 13:34:09.432472 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 13:34:09.435929 systemd[1]: Startup finished in 1.455s (kernel) + 6.284s (initrd) + 3.192s (userspace) = 10.932s.
Oct 31 13:34:09.793173 kubelet[1676]: E1031 13:34:09.793131 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 13:34:09.795521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 13:34:09.795659 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 13:34:09.797373 systemd[1]: kubelet.service: Consumed 758ms CPU time, 258.8M memory peak.
Oct 31 13:34:11.320953 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 31 13:34:11.322144 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:60750.service - OpenSSH per-connection server daemon (10.0.0.1:60750).
Oct 31 13:34:11.394670 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 60750 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY
Oct 31 13:34:11.396301 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 13:34:11.402027 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 31 13:34:11.402969 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 31 13:34:11.407789 systemd-logind[1552]: New session 1 of user core.
Oct 31 13:34:11.427425 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 31 13:34:11.431521 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 31 13:34:11.459208 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 13:34:11.461443 systemd-logind[1552]: New session c1 of user core.
Oct 31 13:34:11.567595 systemd[1695]: Queued start job for default target default.target.
Oct 31 13:34:11.579169 systemd[1695]: Created slice app.slice - User Application Slice.
Oct 31 13:34:11.579198 systemd[1695]: Reached target paths.target - Paths.
Oct 31 13:34:11.579234 systemd[1695]: Reached target timers.target - Timers.
Oct 31 13:34:11.580964 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 31 13:34:11.590079 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 31 13:34:11.590141 systemd[1695]: Reached target sockets.target - Sockets.
Oct 31 13:34:11.590180 systemd[1695]: Reached target basic.target - Basic System.
Oct 31 13:34:11.590208 systemd[1695]: Reached target default.target - Main User Target.
Oct 31 13:34:11.590231 systemd[1695]: Startup finished in 123ms.
Oct 31 13:34:11.590473 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 31 13:34:11.592023 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 31 13:34:11.600881 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:60752.service - OpenSSH per-connection server daemon (10.0.0.1:60752).
Oct 31 13:34:11.647234 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 60752 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY
Oct 31 13:34:11.648394 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 13:34:11.653098 systemd-logind[1552]: New session 2 of user core.
Oct 31 13:34:11.662439 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 31 13:34:11.673623 sshd[1709]: Connection closed by 10.0.0.1 port 60752
Oct 31 13:34:11.674418 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Oct 31 13:34:11.690375 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:60752.service: Deactivated successfully.
Oct 31 13:34:11.692731 systemd[1]: session-2.scope: Deactivated successfully.
Oct 31 13:34:11.693383 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit.
Oct 31 13:34:11.695715 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:60760.service - OpenSSH per-connection server daemon (10.0.0.1:60760).
Oct 31 13:34:11.696188 systemd-logind[1552]: Removed session 2.
Oct 31 13:34:11.757197 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 60760 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY
Oct 31 13:34:11.758325 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 13:34:11.761989 systemd-logind[1552]: New session 3 of user core.
Oct 31 13:34:11.778431 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 31 13:34:11.784891 sshd[1718]: Connection closed by 10.0.0.1 port 60760
Oct 31 13:34:11.785208 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Oct 31 13:34:11.789226 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:60760.service: Deactivated successfully.
Oct 31 13:34:11.790684 systemd[1]: session-3.scope: Deactivated successfully.
Oct 31 13:34:11.791308 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit.
Oct 31 13:34:11.795705 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:60768.service - OpenSSH per-connection server daemon (10.0.0.1:60768).
Oct 31 13:34:11.796199 systemd-logind[1552]: Removed session 3.
Oct 31 13:34:11.855435 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 60768 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:34:11.856898 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:34:11.861744 systemd-logind[1552]: New session 4 of user core. Oct 31 13:34:11.873495 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 13:34:11.884341 sshd[1727]: Connection closed by 10.0.0.1 port 60768 Oct 31 13:34:11.884997 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Oct 31 13:34:11.898295 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:60768.service: Deactivated successfully. Oct 31 13:34:11.900815 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 13:34:11.903351 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. Oct 31 13:34:11.905347 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:60784.service - OpenSSH per-connection server daemon (10.0.0.1:60784). Oct 31 13:34:11.905928 systemd-logind[1552]: Removed session 4. Oct 31 13:34:11.967754 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 60784 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:34:11.968961 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:34:11.972947 systemd-logind[1552]: New session 5 of user core. Oct 31 13:34:11.989446 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 31 13:34:12.005458 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 13:34:12.006006 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 13:34:12.020149 sudo[1738]: pam_unix(sudo:session): session closed for user root Oct 31 13:34:12.021734 sshd[1737]: Connection closed by 10.0.0.1 port 60784 Oct 31 13:34:12.022219 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Oct 31 13:34:12.042430 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:60784.service: Deactivated successfully. Oct 31 13:34:12.045720 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 13:34:12.046386 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. Oct 31 13:34:12.048745 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:60790.service - OpenSSH per-connection server daemon (10.0.0.1:60790). Oct 31 13:34:12.049451 systemd-logind[1552]: Removed session 5. Oct 31 13:34:12.102921 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 60790 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:34:12.104075 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:34:12.107891 systemd-logind[1552]: New session 6 of user core. Oct 31 13:34:12.118412 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 31 13:34:12.128446 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 13:34:12.128700 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 13:34:12.133686 sudo[1749]: pam_unix(sudo:session): session closed for user root Oct 31 13:34:12.139347 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 31 13:34:12.139592 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 13:34:12.148811 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 13:34:12.182588 augenrules[1771]: No rules Oct 31 13:34:12.183794 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 13:34:12.185334 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 13:34:12.187446 sudo[1748]: pam_unix(sudo:session): session closed for user root Oct 31 13:34:12.188948 sshd[1747]: Connection closed by 10.0.0.1 port 60790 Oct 31 13:34:12.189258 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Oct 31 13:34:12.205390 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:60790.service: Deactivated successfully. Oct 31 13:34:12.206951 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 13:34:12.207640 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. Oct 31 13:34:12.209939 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:60794.service - OpenSSH per-connection server daemon (10.0.0.1:60794). Oct 31 13:34:12.210623 systemd-logind[1552]: Removed session 6. Oct 31 13:34:12.263254 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 60794 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:34:12.264484 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:34:12.268345 systemd-logind[1552]: New session 7 of user core. 
Oct 31 13:34:12.280452 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 13:34:12.291047 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 13:34:12.291346 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 13:34:12.557268 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 13:34:12.575564 (dockerd)[1805]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 13:34:12.772056 dockerd[1805]: time="2025-10-31T13:34:12.771990793Z" level=info msg="Starting up" Oct 31 13:34:12.773904 dockerd[1805]: time="2025-10-31T13:34:12.773873460Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 31 13:34:12.784332 dockerd[1805]: time="2025-10-31T13:34:12.784265462Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 31 13:34:12.997893 dockerd[1805]: time="2025-10-31T13:34:12.997788945Z" level=info msg="Loading containers: start." Oct 31 13:34:13.006321 kernel: Initializing XFRM netlink socket Oct 31 13:34:13.182842 systemd-networkd[1488]: docker0: Link UP Oct 31 13:34:13.186145 dockerd[1805]: time="2025-10-31T13:34:13.186103707Z" level=info msg="Loading containers: done." 
Oct 31 13:34:13.199402 dockerd[1805]: time="2025-10-31T13:34:13.199354261Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 13:34:13.199541 dockerd[1805]: time="2025-10-31T13:34:13.199428566Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 31 13:34:13.199612 dockerd[1805]: time="2025-10-31T13:34:13.199574372Z" level=info msg="Initializing buildkit" Oct 31 13:34:13.220675 dockerd[1805]: time="2025-10-31T13:34:13.220640594Z" level=info msg="Completed buildkit initialization" Oct 31 13:34:13.225145 dockerd[1805]: time="2025-10-31T13:34:13.225117241Z" level=info msg="Daemon has completed initialization" Oct 31 13:34:13.225370 dockerd[1805]: time="2025-10-31T13:34:13.225206690Z" level=info msg="API listen on /run/docker.sock" Oct 31 13:34:13.225374 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 13:34:13.862467 containerd[1582]: time="2025-10-31T13:34:13.862395981Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 31 13:34:14.576303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471271858.mount: Deactivated successfully. 
Oct 31 13:34:15.744906 containerd[1582]: time="2025-10-31T13:34:15.743313782Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Oct 31 13:34:15.744906 containerd[1582]: time="2025-10-31T13:34:15.743506340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:15.746157 containerd[1582]: time="2025-10-31T13:34:15.746104161Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:15.747339 containerd[1582]: time="2025-10-31T13:34:15.747312356Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.884879087s" Oct 31 13:34:15.747394 containerd[1582]: time="2025-10-31T13:34:15.747347153Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Oct 31 13:34:15.749233 containerd[1582]: time="2025-10-31T13:34:15.749182711Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 31 13:34:15.749352 containerd[1582]: time="2025-10-31T13:34:15.749325825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 13:34:17.107815 containerd[1582]: time="2025-10-31T13:34:17.107765504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:17.108812 containerd[1582]: time="2025-10-31T13:34:17.108560553Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Oct 31 13:34:17.109576 containerd[1582]: time="2025-10-31T13:34:17.109542359Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:17.112115 containerd[1582]: time="2025-10-31T13:34:17.112086740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:17.113862 containerd[1582]: time="2025-10-31T13:34:17.113821505Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.364605434s" Oct 31 13:34:17.113862 containerd[1582]: time="2025-10-31T13:34:17.113858469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Oct 31 13:34:17.114340 containerd[1582]: time="2025-10-31T13:34:17.114320923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 31 13:34:18.442419 containerd[1582]: time="2025-10-31T13:34:18.442349481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 13:34:18.443055 containerd[1582]: time="2025-10-31T13:34:18.443009509Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Oct 31 13:34:18.443685 containerd[1582]: time="2025-10-31T13:34:18.443652929Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:18.446956 containerd[1582]: time="2025-10-31T13:34:18.446903728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:18.448417 containerd[1582]: time="2025-10-31T13:34:18.448370106Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.334021752s" Oct 31 13:34:18.448417 containerd[1582]: time="2025-10-31T13:34:18.448414648Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Oct 31 13:34:18.448956 containerd[1582]: time="2025-10-31T13:34:18.448930569Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 31 13:34:19.474302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91391200.mount: Deactivated successfully.
Oct 31 13:34:19.896525 containerd[1582]: time="2025-10-31T13:34:19.896406401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:19.897426 containerd[1582]: time="2025-10-31T13:34:19.897329045Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Oct 31 13:34:19.898121 containerd[1582]: time="2025-10-31T13:34:19.898055530Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:19.899990 containerd[1582]: time="2025-10-31T13:34:19.899944594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:19.900787 containerd[1582]: time="2025-10-31T13:34:19.900443186Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.451474939s" Oct 31 13:34:19.900787 containerd[1582]: time="2025-10-31T13:34:19.900484263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Oct 31 13:34:19.901000 containerd[1582]: time="2025-10-31T13:34:19.900894501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 31 13:34:19.945017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 13:34:19.946472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 31 13:34:20.074790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:34:20.079174 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 13:34:20.121462 kubelet[2109]: E1031 13:34:20.121386 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 13:34:20.124902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 13:34:20.125030 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 13:34:20.126357 systemd[1]: kubelet.service: Consumed 140ms CPU time, 107.7M memory peak. Oct 31 13:34:20.606839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811941576.mount: Deactivated successfully. 
Oct 31 13:34:21.547475 containerd[1582]: time="2025-10-31T13:34:21.547409373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:21.561945 containerd[1582]: time="2025-10-31T13:34:21.561893261Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Oct 31 13:34:21.575621 containerd[1582]: time="2025-10-31T13:34:21.575542806Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:21.590550 containerd[1582]: time="2025-10-31T13:34:21.590504076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:21.591699 containerd[1582]: time="2025-10-31T13:34:21.591546893Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.690618899s" Oct 31 13:34:21.591699 containerd[1582]: time="2025-10-31T13:34:21.591582477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Oct 31 13:34:21.592306 containerd[1582]: time="2025-10-31T13:34:21.592281401Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 13:34:22.117385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935686729.mount: Deactivated successfully. 
Oct 31 13:34:22.123863 containerd[1582]: time="2025-10-31T13:34:22.123827596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 13:34:22.124464 containerd[1582]: time="2025-10-31T13:34:22.124262680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 31 13:34:22.125235 containerd[1582]: time="2025-10-31T13:34:22.125202242Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 13:34:22.127233 containerd[1582]: time="2025-10-31T13:34:22.127193352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 13:34:22.128050 containerd[1582]: time="2025-10-31T13:34:22.128028483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 535.715966ms" Oct 31 13:34:22.128107 containerd[1582]: time="2025-10-31T13:34:22.128054962Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 31 13:34:22.128640 containerd[1582]: time="2025-10-31T13:34:22.128616538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 31 13:34:22.780804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424563701.mount: Deactivated successfully. Oct 31 13:34:24.959840 containerd[1582]: time="2025-10-31T13:34:24.959782407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:24.961610 containerd[1582]: time="2025-10-31T13:34:24.961576174Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Oct 31 13:34:24.962270 containerd[1582]: time="2025-10-31T13:34:24.962223213Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:24.966049 containerd[1582]: time="2025-10-31T13:34:24.965619044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:24.966641 containerd[1582]: time="2025-10-31T13:34:24.966613325Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.837968426s" Oct 31 13:34:24.966692 containerd[1582]: time="2025-10-31T13:34:24.966649249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Oct 31 13:34:30.195406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 13:34:30.196879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:34:30.331771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 13:34:30.343511 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 13:34:30.374932 kubelet[2261]: E1031 13:34:30.374885 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 13:34:30.377412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 13:34:30.377644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 13:34:30.378010 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.3M memory peak. Oct 31 13:34:31.222208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:34:31.222390 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.3M memory peak. Oct 31 13:34:31.224292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:34:31.241779 systemd[1]: Reload requested from client PID 2275 ('systemctl') (unit session-7.scope)... Oct 31 13:34:31.241795 systemd[1]: Reloading... Oct 31 13:34:31.316288 zram_generator::config[2320]: No configuration found. Oct 31 13:34:31.614789 systemd[1]: Reloading finished in 372 ms. Oct 31 13:34:31.683693 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 13:34:31.683766 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 13:34:31.684037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:34:31.684081 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95.1M memory peak. Oct 31 13:34:31.686543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 31 13:34:31.797782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:34:31.801326 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 13:34:31.833781 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 13:34:31.833781 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 13:34:31.833781 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 13:34:31.834055 kubelet[2365]: I1031 13:34:31.833827 2365 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 13:34:32.463280 kubelet[2365]: I1031 13:34:32.463230 2365 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 31 13:34:32.463280 kubelet[2365]: I1031 13:34:32.463275 2365 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 13:34:32.463517 kubelet[2365]: I1031 13:34:32.463489 2365 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 31 13:34:32.481444 kubelet[2365]: E1031 13:34:32.481398 2365 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 13:34:32.482678 kubelet[2365]: I1031 13:34:32.482384 2365 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 13:34:32.491087 kubelet[2365]: I1031 13:34:32.491064 2365 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 13:34:32.494352 kubelet[2365]: I1031 13:34:32.494318 2365 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 31 13:34:32.495369 kubelet[2365]: I1031 13:34:32.495326 2365 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 31 13:34:32.495510 kubelet[2365]: I1031 13:34:32.495362 2365 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 13:34:32.495605 kubelet[2365]: I1031 13:34:32.495571 2365 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 13:34:32.495605 kubelet[2365]: I1031 13:34:32.495579 2365 container_manager_linux.go:303] "Creating device plugin manager" Oct 31 13:34:32.496285 kubelet[2365]: I1031 13:34:32.496248 2365 state_mem.go:36] "Initialized new in-memory state store" Oct 31 13:34:32.498698 kubelet[2365]: I1031 13:34:32.498658 2365 kubelet.go:480] "Attempting to sync node with API server" Oct 31 13:34:32.498698 kubelet[2365]: I1031 13:34:32.498683 2365 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 13:34:32.498764 kubelet[2365]: I1031 13:34:32.498704 2365 kubelet.go:386] "Adding apiserver pod source" Oct 31 13:34:32.499769 kubelet[2365]: I1031 13:34:32.499671 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 13:34:32.500604 kubelet[2365]: I1031 13:34:32.500582 2365 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 13:34:32.500808 kubelet[2365]: E1031 13:34:32.500781 2365 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 31 13:34:32.501340 kubelet[2365]: I1031 13:34:32.501314 2365 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 13:34:32.501496 kubelet[2365]: W1031 13:34:32.501435 2365 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 13:34:32.501811 kubelet[2365]: E1031 13:34:32.501774 2365 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 13:34:32.503536 kubelet[2365]: I1031 13:34:32.503518 2365 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 13:34:32.503911 kubelet[2365]: I1031 13:34:32.503557 2365 server.go:1289] "Started kubelet" Oct 31 13:34:32.503911 kubelet[2365]: I1031 13:34:32.503605 2365 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 13:34:32.504515 kubelet[2365]: I1031 13:34:32.504495 2365 server.go:317] "Adding debug handlers to kubelet server" Oct 31 13:34:32.504700 kubelet[2365]: I1031 13:34:32.504644 2365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 13:34:32.507564 kubelet[2365]: E1031 13:34:32.506507 2365 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187396c9c10e5ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 13:34:32.503533246 +0000 UTC m=+0.698513561,LastTimestamp:2025-10-31 13:34:32.503533246 +0000 UTC m=+0.698513561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 13:34:32.507748 kubelet[2365]: I1031 13:34:32.507650 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 13:34:32.507851 kubelet[2365]: I1031 13:34:32.507834 2365 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 13:34:32.508349 kubelet[2365]: I1031 13:34:32.508323 2365 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 13:34:32.508521 kubelet[2365]: E1031 13:34:32.508504 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:34:32.508598 kubelet[2365]: I1031 13:34:32.508589 2365 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 13:34:32.508818 kubelet[2365]: I1031 13:34:32.508791 2365 reconciler.go:26] "Reconciler: start to sync state" Oct 31 13:34:32.508818 kubelet[2365]: I1031 13:34:32.508815 2365 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 13:34:32.509238 kubelet[2365]: E1031 13:34:32.509210 2365 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 31 13:34:32.510101 kubelet[2365]: E1031 13:34:32.509592 2365 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 13:34:32.510101 kubelet[2365]: E1031 13:34:32.509834 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" Oct 31 13:34:32.510590 kubelet[2365]: I1031 13:34:32.510563 2365 factory.go:223] Registration of the containerd container factory successfully Oct 31 13:34:32.510590 kubelet[2365]: I1031 13:34:32.510587 2365 factory.go:223] Registration of the systemd container factory successfully Oct 31 13:34:32.510729 kubelet[2365]: I1031 13:34:32.510704 2365 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 13:34:32.524766 kubelet[2365]: I1031 13:34:32.524747 2365 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 13:34:32.524766 kubelet[2365]: I1031 13:34:32.524760 2365 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 13:34:32.524857 kubelet[2365]: I1031 13:34:32.524788 2365 state_mem.go:36] "Initialized new in-memory state store" Oct 31 13:34:32.525800 kubelet[2365]: I1031 13:34:32.525772 2365 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 31 13:34:32.526853 kubelet[2365]: I1031 13:34:32.526836 2365 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 31 13:34:32.526923 kubelet[2365]: I1031 13:34:32.526915 2365 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 31 13:34:32.527109 kubelet[2365]: I1031 13:34:32.527095 2365 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 31 13:34:32.527168 kubelet[2365]: I1031 13:34:32.527160 2365 kubelet.go:2436] "Starting kubelet main sync loop" Oct 31 13:34:32.527281 kubelet[2365]: E1031 13:34:32.527246 2365 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 13:34:32.607510 kubelet[2365]: I1031 13:34:32.607463 2365 policy_none.go:49] "None policy: Start" Oct 31 13:34:32.607510 kubelet[2365]: I1031 13:34:32.607499 2365 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 13:34:32.607510 kubelet[2365]: I1031 13:34:32.607512 2365 state_mem.go:35] "Initializing new in-memory state store" Oct 31 13:34:32.607774 kubelet[2365]: E1031 13:34:32.607745 2365 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 13:34:32.609056 kubelet[2365]: E1031 13:34:32.609030 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:34:32.612802 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 13:34:32.627616 kubelet[2365]: E1031 13:34:32.627565 2365 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 13:34:32.631732 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 13:34:32.634369 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 31 13:34:32.657031 kubelet[2365]: E1031 13:34:32.656988 2365 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 13:34:32.657208 kubelet[2365]: I1031 13:34:32.657180 2365 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 13:34:32.657255 kubelet[2365]: I1031 13:34:32.657200 2365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 13:34:32.657575 kubelet[2365]: I1031 13:34:32.657448 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 13:34:32.658402 kubelet[2365]: E1031 13:34:32.658371 2365 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 13:34:32.658450 kubelet[2365]: E1031 13:34:32.658415 2365 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 13:34:32.710792 kubelet[2365]: E1031 13:34:32.710736 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Oct 31 13:34:32.759128 kubelet[2365]: I1031 13:34:32.759036 2365 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:34:32.760213 kubelet[2365]: E1031 13:34:32.759454 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Oct 31 13:34:32.838414 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
Oct 31 13:34:32.864685 kubelet[2365]: E1031 13:34:32.864464 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:34:32.866537 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 31 13:34:32.874451 kubelet[2365]: E1031 13:34:32.874414 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:34:32.877190 systemd[1]: Created slice kubepods-burstable-pod606e96b964506b0f1a9fd78a6d24b5a1.slice - libcontainer container kubepods-burstable-pod606e96b964506b0f1a9fd78a6d24b5a1.slice. Oct 31 13:34:32.878910 kubelet[2365]: E1031 13:34:32.878875 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:34:32.910281 kubelet[2365]: I1031 13:34:32.910152 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:32.910281 kubelet[2365]: I1031 13:34:32.910185 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:32.910281 kubelet[2365]: I1031 13:34:32.910208 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 31 13:34:32.910281 kubelet[2365]: I1031 13:34:32.910222 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/606e96b964506b0f1a9fd78a6d24b5a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"606e96b964506b0f1a9fd78a6d24b5a1\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:32.910281 kubelet[2365]: I1031 13:34:32.910246 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/606e96b964506b0f1a9fd78a6d24b5a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"606e96b964506b0f1a9fd78a6d24b5a1\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:32.910429 kubelet[2365]: I1031 13:34:32.910298 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/606e96b964506b0f1a9fd78a6d24b5a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"606e96b964506b0f1a9fd78a6d24b5a1\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:32.910429 kubelet[2365]: I1031 13:34:32.910340 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:32.910429 kubelet[2365]: I1031 13:34:32.910360 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:32.910429 kubelet[2365]: I1031 13:34:32.910380 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:32.961518 kubelet[2365]: I1031 13:34:32.961470 2365 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:34:32.961906 kubelet[2365]: E1031 13:34:32.961868 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Oct 31 13:34:33.111278 kubelet[2365]: E1031 13:34:33.111151 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" Oct 31 13:34:33.165583 kubelet[2365]: E1031 13:34:33.165545 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:33.166242 containerd[1582]: time="2025-10-31T13:34:33.166157162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 31 13:34:33.175587 kubelet[2365]: E1031 13:34:33.175534 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:33.176107 containerd[1582]: time="2025-10-31T13:34:33.176065336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 31 13:34:33.179593 kubelet[2365]: E1031 13:34:33.179372 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:33.179827 containerd[1582]: time="2025-10-31T13:34:33.179795586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:606e96b964506b0f1a9fd78a6d24b5a1,Namespace:kube-system,Attempt:0,}" Oct 31 13:34:33.184756 containerd[1582]: time="2025-10-31T13:34:33.184714156Z" level=info msg="connecting to shim ca9a86c20302c44945790690abb0fc52b616a6d7e7e031bcdca90e8cf8eff18e" address="unix:///run/containerd/s/6c34d5070acc174ed931c824c1ab2713dcd57aedc37048f6589022659a1deb41" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:34:33.211036 containerd[1582]: time="2025-10-31T13:34:33.210948676Z" level=info msg="connecting to shim bc19aff9827278e477b38e687fbdb5c83b9a5fbfd2b1277e905ad69f4ef257e6" address="unix:///run/containerd/s/541389b19e9998f2336c041928b6740ed58ce10a4417838535c5367fa4509b02" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:34:33.211802 containerd[1582]: time="2025-10-31T13:34:33.211755238Z" level=info msg="connecting to shim 727072d934a8baecc445b199e73b556df35edf0959206f53211e6404dce932b7" address="unix:///run/containerd/s/97774beedd7a6459eacb6c735b0577d190933606634c278e457c737b60a7486d" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:34:33.214414 systemd[1]: Started cri-containerd-ca9a86c20302c44945790690abb0fc52b616a6d7e7e031bcdca90e8cf8eff18e.scope - libcontainer container ca9a86c20302c44945790690abb0fc52b616a6d7e7e031bcdca90e8cf8eff18e. 
Oct 31 13:34:33.237420 systemd[1]: Started cri-containerd-bc19aff9827278e477b38e687fbdb5c83b9a5fbfd2b1277e905ad69f4ef257e6.scope - libcontainer container bc19aff9827278e477b38e687fbdb5c83b9a5fbfd2b1277e905ad69f4ef257e6. Oct 31 13:34:33.241870 systemd[1]: Started cri-containerd-727072d934a8baecc445b199e73b556df35edf0959206f53211e6404dce932b7.scope - libcontainer container 727072d934a8baecc445b199e73b556df35edf0959206f53211e6404dce932b7. Oct 31 13:34:33.273277 containerd[1582]: time="2025-10-31T13:34:33.273032305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca9a86c20302c44945790690abb0fc52b616a6d7e7e031bcdca90e8cf8eff18e\"" Oct 31 13:34:33.274488 kubelet[2365]: E1031 13:34:33.274042 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:33.277727 containerd[1582]: time="2025-10-31T13:34:33.277684037Z" level=info msg="CreateContainer within sandbox \"ca9a86c20302c44945790690abb0fc52b616a6d7e7e031bcdca90e8cf8eff18e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 13:34:33.285041 containerd[1582]: time="2025-10-31T13:34:33.285008035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc19aff9827278e477b38e687fbdb5c83b9a5fbfd2b1277e905ad69f4ef257e6\"" Oct 31 13:34:33.285853 kubelet[2365]: E1031 13:34:33.285812 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:33.288089 containerd[1582]: time="2025-10-31T13:34:33.288057655Z" level=info msg="Container 
6f1f79f4a40d81e5329c7a7ced7fad359c9f5f2bb067a0d896db18cf14337033: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:34:33.288946 containerd[1582]: time="2025-10-31T13:34:33.288886640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:606e96b964506b0f1a9fd78a6d24b5a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"727072d934a8baecc445b199e73b556df35edf0959206f53211e6404dce932b7\"" Oct 31 13:34:33.289236 containerd[1582]: time="2025-10-31T13:34:33.289214422Z" level=info msg="CreateContainer within sandbox \"bc19aff9827278e477b38e687fbdb5c83b9a5fbfd2b1277e905ad69f4ef257e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 13:34:33.289745 kubelet[2365]: E1031 13:34:33.289612 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:33.295508 containerd[1582]: time="2025-10-31T13:34:33.295475472Z" level=info msg="CreateContainer within sandbox \"727072d934a8baecc445b199e73b556df35edf0959206f53211e6404dce932b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 13:34:33.299031 containerd[1582]: time="2025-10-31T13:34:33.298992299Z" level=info msg="CreateContainer within sandbox \"ca9a86c20302c44945790690abb0fc52b616a6d7e7e031bcdca90e8cf8eff18e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6f1f79f4a40d81e5329c7a7ced7fad359c9f5f2bb067a0d896db18cf14337033\"" Oct 31 13:34:33.299701 containerd[1582]: time="2025-10-31T13:34:33.299677734Z" level=info msg="StartContainer for \"6f1f79f4a40d81e5329c7a7ced7fad359c9f5f2bb067a0d896db18cf14337033\"" Oct 31 13:34:33.303028 containerd[1582]: time="2025-10-31T13:34:33.302974773Z" level=info msg="connecting to shim 6f1f79f4a40d81e5329c7a7ced7fad359c9f5f2bb067a0d896db18cf14337033" 
address="unix:///run/containerd/s/6c34d5070acc174ed931c824c1ab2713dcd57aedc37048f6589022659a1deb41" protocol=ttrpc version=3 Oct 31 13:34:33.303821 containerd[1582]: time="2025-10-31T13:34:33.303792586Z" level=info msg="Container fe226a44ec152f5fba97ae69cba8c2650cace96ba9ac5150d2a39a1f54748043: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:34:33.310280 containerd[1582]: time="2025-10-31T13:34:33.309639123Z" level=info msg="Container 8aec1f5d163955b264118bc3b7836d0951fd6f3090b03e59d914c16a8d10fa67: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:34:33.314648 containerd[1582]: time="2025-10-31T13:34:33.314609427Z" level=info msg="CreateContainer within sandbox \"bc19aff9827278e477b38e687fbdb5c83b9a5fbfd2b1277e905ad69f4ef257e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fe226a44ec152f5fba97ae69cba8c2650cace96ba9ac5150d2a39a1f54748043\"" Oct 31 13:34:33.315108 containerd[1582]: time="2025-10-31T13:34:33.315081359Z" level=info msg="StartContainer for \"fe226a44ec152f5fba97ae69cba8c2650cace96ba9ac5150d2a39a1f54748043\"" Oct 31 13:34:33.316505 containerd[1582]: time="2025-10-31T13:34:33.316447864Z" level=info msg="connecting to shim fe226a44ec152f5fba97ae69cba8c2650cace96ba9ac5150d2a39a1f54748043" address="unix:///run/containerd/s/541389b19e9998f2336c041928b6740ed58ce10a4417838535c5367fa4509b02" protocol=ttrpc version=3 Oct 31 13:34:33.320078 containerd[1582]: time="2025-10-31T13:34:33.320018869Z" level=info msg="CreateContainer within sandbox \"727072d934a8baecc445b199e73b556df35edf0959206f53211e6404dce932b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8aec1f5d163955b264118bc3b7836d0951fd6f3090b03e59d914c16a8d10fa67\"" Oct 31 13:34:33.321587 containerd[1582]: time="2025-10-31T13:34:33.320490400Z" level=info msg="StartContainer for \"8aec1f5d163955b264118bc3b7836d0951fd6f3090b03e59d914c16a8d10fa67\"" Oct 31 13:34:33.321587 containerd[1582]: time="2025-10-31T13:34:33.321530445Z" level=info 
msg="connecting to shim 8aec1f5d163955b264118bc3b7836d0951fd6f3090b03e59d914c16a8d10fa67" address="unix:///run/containerd/s/97774beedd7a6459eacb6c735b0577d190933606634c278e457c737b60a7486d" protocol=ttrpc version=3 Oct 31 13:34:33.330411 systemd[1]: Started cri-containerd-6f1f79f4a40d81e5329c7a7ced7fad359c9f5f2bb067a0d896db18cf14337033.scope - libcontainer container 6f1f79f4a40d81e5329c7a7ced7fad359c9f5f2bb067a0d896db18cf14337033. Oct 31 13:34:33.335294 systemd[1]: Started cri-containerd-fe226a44ec152f5fba97ae69cba8c2650cace96ba9ac5150d2a39a1f54748043.scope - libcontainer container fe226a44ec152f5fba97ae69cba8c2650cace96ba9ac5150d2a39a1f54748043. Oct 31 13:34:33.356486 systemd[1]: Started cri-containerd-8aec1f5d163955b264118bc3b7836d0951fd6f3090b03e59d914c16a8d10fa67.scope - libcontainer container 8aec1f5d163955b264118bc3b7836d0951fd6f3090b03e59d914c16a8d10fa67. Oct 31 13:34:33.365249 kubelet[2365]: I1031 13:34:33.365099 2365 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:34:33.365702 kubelet[2365]: E1031 13:34:33.365658 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Oct 31 13:34:33.377521 containerd[1582]: time="2025-10-31T13:34:33.377476072Z" level=info msg="StartContainer for \"6f1f79f4a40d81e5329c7a7ced7fad359c9f5f2bb067a0d896db18cf14337033\" returns successfully" Oct 31 13:34:33.398940 containerd[1582]: time="2025-10-31T13:34:33.398837631Z" level=info msg="StartContainer for \"fe226a44ec152f5fba97ae69cba8c2650cace96ba9ac5150d2a39a1f54748043\" returns successfully" Oct 31 13:34:33.407871 containerd[1582]: time="2025-10-31T13:34:33.407831211Z" level=info msg="StartContainer for \"8aec1f5d163955b264118bc3b7836d0951fd6f3090b03e59d914c16a8d10fa67\" returns successfully" Oct 31 13:34:33.456163 kubelet[2365]: E1031 13:34:33.456106 2365 reflector.go:200] "Failed to watch" err="failed 
to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 13:34:33.533348 kubelet[2365]: E1031 13:34:33.533309 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:34:33.533494 kubelet[2365]: E1031 13:34:33.533439 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:33.535738 kubelet[2365]: E1031 13:34:33.535714 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:34:33.536718 kubelet[2365]: E1031 13:34:33.536628 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:33.537569 kubelet[2365]: E1031 13:34:33.537554 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:34:33.537754 kubelet[2365]: E1031 13:34:33.537736 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:34.167288 kubelet[2365]: I1031 13:34:34.166967 2365 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:34:34.539947 kubelet[2365]: E1031 13:34:34.539743 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 
13:34:34.539947 kubelet[2365]: E1031 13:34:34.539864 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:34.540385 kubelet[2365]: E1031 13:34:34.540370 2365 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:34:34.540578 kubelet[2365]: E1031 13:34:34.540561 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:35.379275 kubelet[2365]: E1031 13:34:35.378342 2365 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 13:34:35.502717 kubelet[2365]: I1031 13:34:35.502673 2365 apiserver.go:52] "Watching apiserver" Oct 31 13:34:35.506434 kubelet[2365]: I1031 13:34:35.506399 2365 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 13:34:35.509482 kubelet[2365]: I1031 13:34:35.509417 2365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:35.509482 kubelet[2365]: I1031 13:34:35.509449 2365 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 13:34:35.567455 kubelet[2365]: E1031 13:34:35.567400 2365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:35.567778 kubelet[2365]: I1031 13:34:35.567590 2365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 13:34:35.570279 kubelet[2365]: E1031 13:34:35.569419 2365 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 13:34:35.570591 kubelet[2365]: I1031 13:34:35.570403 2365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:35.572434 kubelet[2365]: E1031 13:34:35.572414 2365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:36.716688 kubelet[2365]: I1031 13:34:36.716658 2365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:36.721666 kubelet[2365]: E1031 13:34:36.721627 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:37.240584 systemd[1]: Reload requested from client PID 2655 ('systemctl') (unit session-7.scope)... Oct 31 13:34:37.240601 systemd[1]: Reloading... Oct 31 13:34:37.303284 zram_generator::config[2699]: No configuration found. Oct 31 13:34:37.480709 systemd[1]: Reloading finished in 239 ms. Oct 31 13:34:37.509568 kubelet[2365]: I1031 13:34:37.509304 2365 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 13:34:37.509610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:34:37.523249 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 13:34:37.523543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:34:37.523605 systemd[1]: kubelet.service: Consumed 1.050s CPU time, 128.2M memory peak. Oct 31 13:34:37.525374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 31 13:34:37.659786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:34:37.663785 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 13:34:37.701847 kubelet[2741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 13:34:37.701847 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 13:34:37.701847 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 13:34:37.702175 kubelet[2741]: I1031 13:34:37.701886 2741 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 13:34:37.708745 kubelet[2741]: I1031 13:34:37.708698 2741 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 31 13:34:37.708745 kubelet[2741]: I1031 13:34:37.708727 2741 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 13:34:37.709396 kubelet[2741]: I1031 13:34:37.708931 2741 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 13:34:37.710195 kubelet[2741]: I1031 13:34:37.710173 2741 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 13:34:37.714437 kubelet[2741]: I1031 13:34:37.714266 2741 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 13:34:37.722036 kubelet[2741]: I1031 
13:34:37.721983 2741 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 13:34:37.725352 kubelet[2741]: I1031 13:34:37.725324 2741 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 31 13:34:37.725693 kubelet[2741]: I1031 13:34:37.725659 2741 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 13:34:37.725935 kubelet[2741]: I1031 13:34:37.725772 2741 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuot
aPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 13:34:37.726054 kubelet[2741]: I1031 13:34:37.726039 2741 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 13:34:37.726112 kubelet[2741]: I1031 13:34:37.726102 2741 container_manager_linux.go:303] "Creating device plugin manager" Oct 31 13:34:37.726219 kubelet[2741]: I1031 13:34:37.726207 2741 state_mem.go:36] "Initialized new in-memory state store" Oct 31 13:34:37.726473 kubelet[2741]: I1031 13:34:37.726449 2741 kubelet.go:480] "Attempting to sync node with API server" Oct 31 13:34:37.726548 kubelet[2741]: I1031 13:34:37.726536 2741 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 13:34:37.726657 kubelet[2741]: I1031 13:34:37.726646 2741 kubelet.go:386] "Adding apiserver pod source" Oct 31 13:34:37.726836 kubelet[2741]: I1031 13:34:37.726823 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 13:34:37.728097 kubelet[2741]: I1031 13:34:37.728073 2741 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 13:34:37.729042 kubelet[2741]: I1031 13:34:37.729016 2741 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 13:34:37.732886 kubelet[2741]: I1031 13:34:37.732863 2741 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 13:34:37.733007 kubelet[2741]: I1031 13:34:37.732996 2741 server.go:1289] "Started kubelet" Oct 31 13:34:37.734292 kubelet[2741]: I1031 13:34:37.734084 2741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 13:34:37.734292 kubelet[2741]: I1031 13:34:37.734193 2741 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 13:34:37.735301 kubelet[2741]: I1031 13:34:37.735208 2741 
server.go:317] "Adding debug handlers to kubelet server" Oct 31 13:34:37.737339 kubelet[2741]: I1031 13:34:37.736506 2741 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 13:34:37.740436 kubelet[2741]: I1031 13:34:37.738995 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 13:34:37.741514 kubelet[2741]: I1031 13:34:37.741495 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 13:34:37.741933 kubelet[2741]: E1031 13:34:37.741905 2741 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:34:37.742630 kubelet[2741]: I1031 13:34:37.742423 2741 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 13:34:37.742814 kubelet[2741]: I1031 13:34:37.742797 2741 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 13:34:37.743003 kubelet[2741]: I1031 13:34:37.742989 2741 reconciler.go:26] "Reconciler: start to sync state" Oct 31 13:34:37.747704 kubelet[2741]: I1031 13:34:37.747649 2741 factory.go:223] Registration of the containerd container factory successfully Oct 31 13:34:37.747704 kubelet[2741]: I1031 13:34:37.747682 2741 factory.go:223] Registration of the systemd container factory successfully Oct 31 13:34:37.747817 kubelet[2741]: I1031 13:34:37.747760 2741 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 13:34:37.751798 kubelet[2741]: E1031 13:34:37.751771 2741 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 13:34:37.756766 kubelet[2741]: I1031 13:34:37.756724 2741 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 31 13:34:37.757799 kubelet[2741]: I1031 13:34:37.757778 2741 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 31 13:34:37.757887 kubelet[2741]: I1031 13:34:37.757878 2741 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 31 13:34:37.757957 kubelet[2741]: I1031 13:34:37.757948 2741 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 13:34:37.758003 kubelet[2741]: I1031 13:34:37.757996 2741 kubelet.go:2436] "Starting kubelet main sync loop" Oct 31 13:34:37.758092 kubelet[2741]: E1031 13:34:37.758076 2741 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 13:34:37.790337 kubelet[2741]: I1031 13:34:37.790236 2741 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 13:34:37.791313 kubelet[2741]: I1031 13:34:37.790484 2741 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 13:34:37.791313 kubelet[2741]: I1031 13:34:37.790523 2741 state_mem.go:36] "Initialized new in-memory state store" Oct 31 13:34:37.791313 kubelet[2741]: I1031 13:34:37.790659 2741 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 13:34:37.791313 kubelet[2741]: I1031 13:34:37.790669 2741 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 13:34:37.791313 kubelet[2741]: I1031 13:34:37.790687 2741 policy_none.go:49] "None policy: Start" Oct 31 13:34:37.791313 kubelet[2741]: I1031 13:34:37.790697 2741 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 13:34:37.791313 kubelet[2741]: I1031 13:34:37.790706 2741 state_mem.go:35] "Initializing new in-memory state 
store" Oct 31 13:34:37.791313 kubelet[2741]: I1031 13:34:37.790793 2741 state_mem.go:75] "Updated machine memory state" Oct 31 13:34:37.795507 kubelet[2741]: E1031 13:34:37.795479 2741 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 13:34:37.795680 kubelet[2741]: I1031 13:34:37.795657 2741 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 13:34:37.795719 kubelet[2741]: I1031 13:34:37.795678 2741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 13:34:37.795982 kubelet[2741]: I1031 13:34:37.795963 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 13:34:37.797042 kubelet[2741]: E1031 13:34:37.796871 2741 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 13:34:37.859219 kubelet[2741]: I1031 13:34:37.859184 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:37.859404 kubelet[2741]: I1031 13:34:37.859380 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:37.859547 kubelet[2741]: I1031 13:34:37.859350 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 13:34:37.865349 kubelet[2741]: E1031 13:34:37.865322 2741 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:37.899671 kubelet[2741]: I1031 13:34:37.899647 2741 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:34:37.906913 kubelet[2741]: I1031 13:34:37.906882 2741 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 13:34:37.907182 
kubelet[2741]: I1031 13:34:37.907167 2741 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 13:34:37.945663 kubelet[2741]: I1031 13:34:37.944190 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/606e96b964506b0f1a9fd78a6d24b5a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"606e96b964506b0f1a9fd78a6d24b5a1\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:37.945663 kubelet[2741]: I1031 13:34:37.944235 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:37.945663 kubelet[2741]: I1031 13:34:37.944274 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:37.945663 kubelet[2741]: I1031 13:34:37.944299 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:37.945663 kubelet[2741]: I1031 13:34:37.944320 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" 
(UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 31 13:34:37.945869 kubelet[2741]: I1031 13:34:37.944335 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/606e96b964506b0f1a9fd78a6d24b5a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"606e96b964506b0f1a9fd78a6d24b5a1\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:37.945869 kubelet[2741]: I1031 13:34:37.944350 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/606e96b964506b0f1a9fd78a6d24b5a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"606e96b964506b0f1a9fd78a6d24b5a1\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:37.945869 kubelet[2741]: I1031 13:34:37.944367 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:37.945869 kubelet[2741]: I1031 13:34:37.944382 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:38.163944 kubelet[2741]: E1031 13:34:38.163837 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:38.166118 kubelet[2741]: E1031 
13:34:38.166064 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:38.166207 kubelet[2741]: E1031 13:34:38.166066 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:38.727484 kubelet[2741]: I1031 13:34:38.727437 2741 apiserver.go:52] "Watching apiserver" Oct 31 13:34:38.743215 kubelet[2741]: I1031 13:34:38.743176 2741 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 13:34:38.768284 kubelet[2741]: I1031 13:34:38.768190 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.768157237 podStartE2EDuration="1.768157237s" podCreationTimestamp="2025-10-31 13:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:34:38.767272564 +0000 UTC m=+1.099821731" watchObservedRunningTime="2025-10-31 13:34:38.768157237 +0000 UTC m=+1.100706404" Oct 31 13:34:38.773651 kubelet[2741]: I1031 13:34:38.773611 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.773601031 podStartE2EDuration="1.773601031s" podCreationTimestamp="2025-10-31 13:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:34:38.7735801 +0000 UTC m=+1.106129267" watchObservedRunningTime="2025-10-31 13:34:38.773601031 +0000 UTC m=+1.106150198" Oct 31 13:34:38.778675 kubelet[2741]: I1031 13:34:38.778658 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:38.778763 
kubelet[2741]: I1031 13:34:38.778740 2741 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:38.779214 kubelet[2741]: E1031 13:34:38.779190 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:38.785173 kubelet[2741]: E1031 13:34:38.785145 2741 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:34:38.785349 kubelet[2741]: E1031 13:34:38.785333 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:38.785654 kubelet[2741]: I1031 13:34:38.785621 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.785607817 podStartE2EDuration="2.785607817s" podCreationTimestamp="2025-10-31 13:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:34:38.785360965 +0000 UTC m=+1.117910132" watchObservedRunningTime="2025-10-31 13:34:38.785607817 +0000 UTC m=+1.118156984" Oct 31 13:34:38.788949 kubelet[2741]: E1031 13:34:38.788800 2741 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 13:34:38.789055 kubelet[2741]: E1031 13:34:38.789039 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:39.780015 kubelet[2741]: E1031 13:34:39.779982 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:39.780508 kubelet[2741]: E1031 13:34:39.780118 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:39.780583 kubelet[2741]: E1031 13:34:39.780536 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:40.780947 kubelet[2741]: E1031 13:34:40.780917 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:42.727647 kubelet[2741]: I1031 13:34:42.727598 2741 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 13:34:42.728213 kubelet[2741]: I1031 13:34:42.728158 2741 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 13:34:42.728254 containerd[1582]: time="2025-10-31T13:34:42.727937656Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 13:34:43.812460 systemd[1]: Created slice kubepods-besteffort-podcd450499_4583_4644_8b5e_08602e1ec17a.slice - libcontainer container kubepods-besteffort-podcd450499_4583_4644_8b5e_08602e1ec17a.slice. 
Oct 31 13:34:43.897750 kubelet[2741]: I1031 13:34:43.897585 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd450499-4583-4644-8b5e-08602e1ec17a-lib-modules\") pod \"kube-proxy-wnrmp\" (UID: \"cd450499-4583-4644-8b5e-08602e1ec17a\") " pod="kube-system/kube-proxy-wnrmp" Oct 31 13:34:43.897750 kubelet[2741]: I1031 13:34:43.897658 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftc74\" (UniqueName: \"kubernetes.io/projected/cd450499-4583-4644-8b5e-08602e1ec17a-kube-api-access-ftc74\") pod \"kube-proxy-wnrmp\" (UID: \"cd450499-4583-4644-8b5e-08602e1ec17a\") " pod="kube-system/kube-proxy-wnrmp" Oct 31 13:34:43.897750 kubelet[2741]: I1031 13:34:43.897698 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd450499-4583-4644-8b5e-08602e1ec17a-kube-proxy\") pod \"kube-proxy-wnrmp\" (UID: \"cd450499-4583-4644-8b5e-08602e1ec17a\") " pod="kube-system/kube-proxy-wnrmp" Oct 31 13:34:43.897750 kubelet[2741]: I1031 13:34:43.897717 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd450499-4583-4644-8b5e-08602e1ec17a-xtables-lock\") pod \"kube-proxy-wnrmp\" (UID: \"cd450499-4583-4644-8b5e-08602e1ec17a\") " pod="kube-system/kube-proxy-wnrmp" Oct 31 13:34:43.930577 systemd[1]: Created slice kubepods-besteffort-podc6ec0d1e_ca9d_45d3_8931_003c818edb32.slice - libcontainer container kubepods-besteffort-podc6ec0d1e_ca9d_45d3_8931_003c818edb32.slice. 
Oct 31 13:34:44.000957 kubelet[2741]: I1031 13:34:44.000911 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6ec0d1e-ca9d-45d3-8931-003c818edb32-var-lib-calico\") pod \"tigera-operator-7dcd859c48-g9tv6\" (UID: \"c6ec0d1e-ca9d-45d3-8931-003c818edb32\") " pod="tigera-operator/tigera-operator-7dcd859c48-g9tv6" Oct 31 13:34:44.001068 kubelet[2741]: I1031 13:34:44.000986 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkg2\" (UniqueName: \"kubernetes.io/projected/c6ec0d1e-ca9d-45d3-8931-003c818edb32-kube-api-access-dpkg2\") pod \"tigera-operator-7dcd859c48-g9tv6\" (UID: \"c6ec0d1e-ca9d-45d3-8931-003c818edb32\") " pod="tigera-operator/tigera-operator-7dcd859c48-g9tv6" Oct 31 13:34:44.139668 kubelet[2741]: E1031 13:34:44.138656 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:44.140511 containerd[1582]: time="2025-10-31T13:34:44.140353558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wnrmp,Uid:cd450499-4583-4644-8b5e-08602e1ec17a,Namespace:kube-system,Attempt:0,}" Oct 31 13:34:44.156242 containerd[1582]: time="2025-10-31T13:34:44.156208127Z" level=info msg="connecting to shim 738336fbb714aaa3e4dca038375e091a93b733bb18deacf34bf5ec661c4e3098" address="unix:///run/containerd/s/7d64384a36295dfd7fddd56abc5339cca69699df33097150574a49e2928cf8bd" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:34:44.179411 systemd[1]: Started cri-containerd-738336fbb714aaa3e4dca038375e091a93b733bb18deacf34bf5ec661c4e3098.scope - libcontainer container 738336fbb714aaa3e4dca038375e091a93b733bb18deacf34bf5ec661c4e3098. 
Oct 31 13:34:44.199042 containerd[1582]: time="2025-10-31T13:34:44.199008793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wnrmp,Uid:cd450499-4583-4644-8b5e-08602e1ec17a,Namespace:kube-system,Attempt:0,} returns sandbox id \"738336fbb714aaa3e4dca038375e091a93b733bb18deacf34bf5ec661c4e3098\"" Oct 31 13:34:44.199767 kubelet[2741]: E1031 13:34:44.199730 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:44.203827 containerd[1582]: time="2025-10-31T13:34:44.203783319Z" level=info msg="CreateContainer within sandbox \"738336fbb714aaa3e4dca038375e091a93b733bb18deacf34bf5ec661c4e3098\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 13:34:44.212308 containerd[1582]: time="2025-10-31T13:34:44.211597460Z" level=info msg="Container ba4f2bcf8d16a77fbf6ecf380418f21aae684f64e01714050922c150b21a0e44: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:34:44.218940 containerd[1582]: time="2025-10-31T13:34:44.218895881Z" level=info msg="CreateContainer within sandbox \"738336fbb714aaa3e4dca038375e091a93b733bb18deacf34bf5ec661c4e3098\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ba4f2bcf8d16a77fbf6ecf380418f21aae684f64e01714050922c150b21a0e44\"" Oct 31 13:34:44.219410 containerd[1582]: time="2025-10-31T13:34:44.219372786Z" level=info msg="StartContainer for \"ba4f2bcf8d16a77fbf6ecf380418f21aae684f64e01714050922c150b21a0e44\"" Oct 31 13:34:44.221037 containerd[1582]: time="2025-10-31T13:34:44.220987690Z" level=info msg="connecting to shim ba4f2bcf8d16a77fbf6ecf380418f21aae684f64e01714050922c150b21a0e44" address="unix:///run/containerd/s/7d64384a36295dfd7fddd56abc5339cca69699df33097150574a49e2928cf8bd" protocol=ttrpc version=3 Oct 31 13:34:44.233811 containerd[1582]: time="2025-10-31T13:34:44.233747143Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g9tv6,Uid:c6ec0d1e-ca9d-45d3-8931-003c818edb32,Namespace:tigera-operator,Attempt:0,}" Oct 31 13:34:44.242410 systemd[1]: Started cri-containerd-ba4f2bcf8d16a77fbf6ecf380418f21aae684f64e01714050922c150b21a0e44.scope - libcontainer container ba4f2bcf8d16a77fbf6ecf380418f21aae684f64e01714050922c150b21a0e44. Oct 31 13:34:44.251158 containerd[1582]: time="2025-10-31T13:34:44.251105333Z" level=info msg="connecting to shim 70fa9ff6aede0ced6785391e82808adf7fb648869cd0dd94b2743ec5cb43c185" address="unix:///run/containerd/s/d41f7cfcb41b49d5fcdc9c6da5cf6079ad74acae98af444ed5d2dc435efe3747" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:34:44.275048 systemd[1]: Started cri-containerd-70fa9ff6aede0ced6785391e82808adf7fb648869cd0dd94b2743ec5cb43c185.scope - libcontainer container 70fa9ff6aede0ced6785391e82808adf7fb648869cd0dd94b2743ec5cb43c185. Oct 31 13:34:44.282339 containerd[1582]: time="2025-10-31T13:34:44.282292269Z" level=info msg="StartContainer for \"ba4f2bcf8d16a77fbf6ecf380418f21aae684f64e01714050922c150b21a0e44\" returns successfully" Oct 31 13:34:44.312783 containerd[1582]: time="2025-10-31T13:34:44.312748083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g9tv6,Uid:c6ec0d1e-ca9d-45d3-8931-003c818edb32,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"70fa9ff6aede0ced6785391e82808adf7fb648869cd0dd94b2743ec5cb43c185\"" Oct 31 13:34:44.316527 containerd[1582]: time="2025-10-31T13:34:44.316499573Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 13:34:44.793197 kubelet[2741]: E1031 13:34:44.792958 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:44.803116 kubelet[2741]: I1031 13:34:44.802354 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wnrmp" 
podStartSLOduration=1.8023396310000002 podStartE2EDuration="1.802339631s" podCreationTimestamp="2025-10-31 13:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:34:44.80153596 +0000 UTC m=+7.134085127" watchObservedRunningTime="2025-10-31 13:34:44.802339631 +0000 UTC m=+7.134888798" Oct 31 13:34:44.935998 kubelet[2741]: E1031 13:34:44.935087 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:45.015407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287245059.mount: Deactivated successfully. Oct 31 13:34:45.498469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463109650.mount: Deactivated successfully. Oct 31 13:34:45.796694 kubelet[2741]: E1031 13:34:45.796604 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:46.363134 containerd[1582]: time="2025-10-31T13:34:46.363072444Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:46.364289 containerd[1582]: time="2025-10-31T13:34:46.364234046Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 31 13:34:46.365286 containerd[1582]: time="2025-10-31T13:34:46.365113431Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:46.367350 containerd[1582]: time="2025-10-31T13:34:46.367308031Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:34:46.368032 containerd[1582]: time="2025-10-31T13:34:46.367926525Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.051394739s" Oct 31 13:34:46.368032 containerd[1582]: time="2025-10-31T13:34:46.367954614Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 31 13:34:46.373195 containerd[1582]: time="2025-10-31T13:34:46.373166779Z" level=info msg="CreateContainer within sandbox \"70fa9ff6aede0ced6785391e82808adf7fb648869cd0dd94b2743ec5cb43c185\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 13:34:46.380336 containerd[1582]: time="2025-10-31T13:34:46.378370462Z" level=info msg="Container e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:34:46.383585 containerd[1582]: time="2025-10-31T13:34:46.383544773Z" level=info msg="CreateContainer within sandbox \"70fa9ff6aede0ced6785391e82808adf7fb648869cd0dd94b2743ec5cb43c185\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70\"" Oct 31 13:34:46.383936 containerd[1582]: time="2025-10-31T13:34:46.383915022Z" level=info msg="StartContainer for \"e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70\"" Oct 31 13:34:46.384620 containerd[1582]: time="2025-10-31T13:34:46.384586014Z" level=info msg="connecting to shim e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70" 
address="unix:///run/containerd/s/d41f7cfcb41b49d5fcdc9c6da5cf6079ad74acae98af444ed5d2dc435efe3747" protocol=ttrpc version=3 Oct 31 13:34:46.420406 systemd[1]: Started cri-containerd-e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70.scope - libcontainer container e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70. Oct 31 13:34:46.446958 containerd[1582]: time="2025-10-31T13:34:46.446915679Z" level=info msg="StartContainer for \"e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70\" returns successfully" Oct 31 13:34:46.799241 kubelet[2741]: E1031 13:34:46.799206 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:46.807143 kubelet[2741]: I1031 13:34:46.807076 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-g9tv6" podStartSLOduration=1.7527798049999999 podStartE2EDuration="3.807061759s" podCreationTimestamp="2025-10-31 13:34:43 +0000 UTC" firstStartedPulling="2025-10-31 13:34:44.316138474 +0000 UTC m=+6.648687641" lastFinishedPulling="2025-10-31 13:34:46.370420428 +0000 UTC m=+8.702969595" observedRunningTime="2025-10-31 13:34:46.80691839 +0000 UTC m=+9.139467557" watchObservedRunningTime="2025-10-31 13:34:46.807061759 +0000 UTC m=+9.139610886" Oct 31 13:34:48.106274 kubelet[2741]: E1031 13:34:48.104193 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:48.412814 systemd[1]: cri-containerd-e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70.scope: Deactivated successfully. 
Oct 31 13:34:48.433382 containerd[1582]: time="2025-10-31T13:34:48.433339577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70\" id:\"e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70\" pid:3073 exit_status:1 exited_at:{seconds:1761917688 nanos:432892078}" Oct 31 13:34:48.438554 containerd[1582]: time="2025-10-31T13:34:48.438493179Z" level=info msg="received exit event container_id:\"e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70\" id:\"e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70\" pid:3073 exit_status:1 exited_at:{seconds:1761917688 nanos:432892078}" Oct 31 13:34:48.485682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70-rootfs.mount: Deactivated successfully. Oct 31 13:34:48.670220 kubelet[2741]: E1031 13:34:48.670114 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:48.805690 kubelet[2741]: E1031 13:34:48.805347 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:48.805690 kubelet[2741]: E1031 13:34:48.805447 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:34:48.810480 kubelet[2741]: I1031 13:34:48.810452 2741 scope.go:117] "RemoveContainer" containerID="e690dc433cd281d108848fde223676160509251fbf60bb7e9ce5ec7e044e7a70" Oct 31 13:34:48.814272 containerd[1582]: time="2025-10-31T13:34:48.814001325Z" level=info msg="CreateContainer within sandbox \"70fa9ff6aede0ced6785391e82808adf7fb648869cd0dd94b2743ec5cb43c185\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 31 13:34:48.845170 containerd[1582]: time="2025-10-31T13:34:48.844612082Z" level=info msg="Container 838569b2d76d8a1dab571676a1a7515c6003b92dd37fcfafa7d9e2df6649d5e7: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:34:48.851491 containerd[1582]: time="2025-10-31T13:34:48.851446967Z" level=info msg="CreateContainer within sandbox \"70fa9ff6aede0ced6785391e82808adf7fb648869cd0dd94b2743ec5cb43c185\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"838569b2d76d8a1dab571676a1a7515c6003b92dd37fcfafa7d9e2df6649d5e7\"" Oct 31 13:34:48.852364 containerd[1582]: time="2025-10-31T13:34:48.852338324Z" level=info msg="StartContainer for \"838569b2d76d8a1dab571676a1a7515c6003b92dd37fcfafa7d9e2df6649d5e7\"" Oct 31 13:34:48.853390 containerd[1582]: time="2025-10-31T13:34:48.853364363Z" level=info msg="connecting to shim 838569b2d76d8a1dab571676a1a7515c6003b92dd37fcfafa7d9e2df6649d5e7" address="unix:///run/containerd/s/d41f7cfcb41b49d5fcdc9c6da5cf6079ad74acae98af444ed5d2dc435efe3747" protocol=ttrpc version=3 Oct 31 13:34:48.877492 systemd[1]: Started cri-containerd-838569b2d76d8a1dab571676a1a7515c6003b92dd37fcfafa7d9e2df6649d5e7.scope - libcontainer container 838569b2d76d8a1dab571676a1a7515c6003b92dd37fcfafa7d9e2df6649d5e7. Oct 31 13:34:48.907184 containerd[1582]: time="2025-10-31T13:34:48.907124357Z" level=info msg="StartContainer for \"838569b2d76d8a1dab571676a1a7515c6003b92dd37fcfafa7d9e2df6649d5e7\" returns successfully" Oct 31 13:34:51.847177 sudo[1784]: pam_unix(sudo:session): session closed for user root Oct 31 13:34:51.849265 sshd[1783]: Connection closed by 10.0.0.1 port 60794 Oct 31 13:34:51.849737 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Oct 31 13:34:51.854746 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:60794.service: Deactivated successfully. Oct 31 13:34:51.857811 systemd[1]: session-7.scope: Deactivated successfully. 
Oct 31 13:34:51.857995 systemd[1]: session-7.scope: Consumed 7.984s CPU time, 212.4M memory peak. Oct 31 13:34:51.859712 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. Oct 31 13:34:51.861229 systemd-logind[1552]: Removed session 7. Oct 31 13:34:53.505025 update_engine[1555]: I20251031 13:34:53.504953 1555 update_attempter.cc:509] Updating boot flags... Oct 31 13:34:59.886477 systemd[1]: Created slice kubepods-besteffort-pod75146c56_d887_43b0_aa3d_36f3195610ac.slice - libcontainer container kubepods-besteffort-pod75146c56_d887_43b0_aa3d_36f3195610ac.slice. Oct 31 13:34:59.917849 kubelet[2741]: I1031 13:34:59.917811 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/75146c56-d887-43b0-aa3d-36f3195610ac-typha-certs\") pod \"calico-typha-5cb49f9ff-xhtzj\" (UID: \"75146c56-d887-43b0-aa3d-36f3195610ac\") " pod="calico-system/calico-typha-5cb49f9ff-xhtzj" Oct 31 13:34:59.918448 kubelet[2741]: I1031 13:34:59.918402 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75146c56-d887-43b0-aa3d-36f3195610ac-tigera-ca-bundle\") pod \"calico-typha-5cb49f9ff-xhtzj\" (UID: \"75146c56-d887-43b0-aa3d-36f3195610ac\") " pod="calico-system/calico-typha-5cb49f9ff-xhtzj" Oct 31 13:34:59.918577 kubelet[2741]: I1031 13:34:59.918512 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pclc\" (UniqueName: \"kubernetes.io/projected/75146c56-d887-43b0-aa3d-36f3195610ac-kube-api-access-9pclc\") pod \"calico-typha-5cb49f9ff-xhtzj\" (UID: \"75146c56-d887-43b0-aa3d-36f3195610ac\") " pod="calico-system/calico-typha-5cb49f9ff-xhtzj" Oct 31 13:35:00.063691 systemd[1]: Created slice kubepods-besteffort-pod9a25ead5_3f4e_4dec_adb2_8def6cd62bc4.slice - libcontainer container 
kubepods-besteffort-pod9a25ead5_3f4e_4dec_adb2_8def6cd62bc4.slice. Oct 31 13:35:00.120051 kubelet[2741]: I1031 13:35:00.119997 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-tigera-ca-bundle\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120051 kubelet[2741]: I1031 13:35:00.120046 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-flexvol-driver-host\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120207 kubelet[2741]: I1031 13:35:00.120067 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-xtables-lock\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120207 kubelet[2741]: I1031 13:35:00.120084 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4gq\" (UniqueName: \"kubernetes.io/projected/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-kube-api-access-nl4gq\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120207 kubelet[2741]: I1031 13:35:00.120102 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-lib-modules\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " 
pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120207 kubelet[2741]: I1031 13:35:00.120146 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-cni-bin-dir\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120207 kubelet[2741]: I1031 13:35:00.120163 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-cni-log-dir\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120333 kubelet[2741]: I1031 13:35:00.120178 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-cni-net-dir\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120333 kubelet[2741]: I1031 13:35:00.120193 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-node-certs\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120333 kubelet[2741]: I1031 13:35:00.120220 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-policysync\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120333 kubelet[2741]: I1031 13:35:00.120236 
2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-var-lib-calico\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.120333 kubelet[2741]: I1031 13:35:00.120256 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9a25ead5-3f4e-4dec-adb2-8def6cd62bc4-var-run-calico\") pod \"calico-node-tfb77\" (UID: \"9a25ead5-3f4e-4dec-adb2-8def6cd62bc4\") " pod="calico-system/calico-node-tfb77" Oct 31 13:35:00.192338 kubelet[2741]: E1031 13:35:00.191841 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:00.192667 containerd[1582]: time="2025-10-31T13:35:00.192630318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb49f9ff-xhtzj,Uid:75146c56-d887-43b0-aa3d-36f3195610ac,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:00.223355 kubelet[2741]: E1031 13:35:00.222763 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.223355 kubelet[2741]: W1031 13:35:00.222790 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.225349 kubelet[2741]: E1031 13:35:00.225310 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.231707 kubelet[2741]: E1031 13:35:00.231684 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.231851 kubelet[2741]: W1031 13:35:00.231836 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.231935 kubelet[2741]: E1031 13:35:00.231922 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.233721 containerd[1582]: time="2025-10-31T13:35:00.233684314Z" level=info msg="connecting to shim d002f3258fbf4846af3359c594418ae58c79465845458b74feda7677f2a4c113" address="unix:///run/containerd/s/2acd975792b3cb1fa977b36004f26d832033addde3306b8c00fc60330417dcda" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:00.244118 kubelet[2741]: E1031 13:35:00.244092 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.244249 kubelet[2741]: W1031 13:35:00.244233 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.244335 kubelet[2741]: E1031 13:35:00.244305 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.261514 systemd[1]: Started cri-containerd-d002f3258fbf4846af3359c594418ae58c79465845458b74feda7677f2a4c113.scope - libcontainer container d002f3258fbf4846af3359c594418ae58c79465845458b74feda7677f2a4c113. 
Oct 31 13:35:00.275493 kubelet[2741]: E1031 13:35:00.275433 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skbn9" podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:00.301287 kubelet[2741]: E1031 13:35:00.300927 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.301287 kubelet[2741]: W1031 13:35:00.300955 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.301287 kubelet[2741]: E1031 13:35:00.300973 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.301539 kubelet[2741]: E1031 13:35:00.301304 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.301539 kubelet[2741]: W1031 13:35:00.301315 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.301539 kubelet[2741]: E1031 13:35:00.301356 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.304034 kubelet[2741]: E1031 13:35:00.304020 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.304034 kubelet[2741]: W1031 13:35:00.304028 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.304077 kubelet[2741]: E1031 13:35:00.304036 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.321881 kubelet[2741]: E1031 13:35:00.321778 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.321881 kubelet[2741]: W1031 13:35:00.321861 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.322052 kubelet[2741]: E1031 13:35:00.321990 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.322052 kubelet[2741]: I1031 13:35:00.322035 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd-socket-dir\") pod \"csi-node-driver-skbn9\" (UID: \"8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd\") " pod="calico-system/csi-node-driver-skbn9" Oct 31 13:35:00.322548 kubelet[2741]: E1031 13:35:00.322521 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.322548 kubelet[2741]: W1031 13:35:00.322542 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.322647 kubelet[2741]: E1031 13:35:00.322556 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.322647 kubelet[2741]: I1031 13:35:00.322582 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54xx\" (UniqueName: \"kubernetes.io/projected/8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd-kube-api-access-s54xx\") pod \"csi-node-driver-skbn9\" (UID: \"8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd\") " pod="calico-system/csi-node-driver-skbn9" Oct 31 13:35:00.322888 kubelet[2741]: E1031 13:35:00.322869 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.322888 kubelet[2741]: W1031 13:35:00.322884 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.322946 kubelet[2741]: E1031 13:35:00.322913 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.322946 kubelet[2741]: I1031 13:35:00.322933 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd-registration-dir\") pod \"csi-node-driver-skbn9\" (UID: \"8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd\") " pod="calico-system/csi-node-driver-skbn9" Oct 31 13:35:00.323138 kubelet[2741]: E1031 13:35:00.323109 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.323138 kubelet[2741]: W1031 13:35:00.323120 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.323138 kubelet[2741]: E1031 13:35:00.323128 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.323272 kubelet[2741]: I1031 13:35:00.323143 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd-kubelet-dir\") pod \"csi-node-driver-skbn9\" (UID: \"8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd\") " pod="calico-system/csi-node-driver-skbn9" Oct 31 13:35:00.323481 kubelet[2741]: E1031 13:35:00.323462 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.323481 kubelet[2741]: W1031 13:35:00.323481 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.323558 kubelet[2741]: E1031 13:35:00.323496 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.323689 kubelet[2741]: E1031 13:35:00.323675 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.323689 kubelet[2741]: W1031 13:35:00.323686 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.323763 kubelet[2741]: E1031 13:35:00.323695 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.324337 kubelet[2741]: E1031 13:35:00.324322 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.324337 kubelet[2741]: W1031 13:35:00.324335 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.324524 kubelet[2741]: E1031 13:35:00.324343 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.324884 kubelet[2741]: E1031 13:35:00.324865 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.325034 kubelet[2741]: W1031 13:35:00.324956 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.325034 kubelet[2741]: E1031 13:35:00.324977 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.325034 kubelet[2741]: I1031 13:35:00.325007 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd-varrun\") pod \"csi-node-driver-skbn9\" (UID: \"8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd\") " pod="calico-system/csi-node-driver-skbn9" Oct 31 13:35:00.325282 kubelet[2741]: E1031 13:35:00.325248 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.325316 kubelet[2741]: W1031 13:35:00.325282 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.325316 kubelet[2741]: E1031 13:35:00.325296 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.325559 kubelet[2741]: E1031 13:35:00.325544 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.325588 kubelet[2741]: W1031 13:35:00.325559 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.325588 kubelet[2741]: E1031 13:35:00.325570 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.326109 kubelet[2741]: E1031 13:35:00.326054 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.326109 kubelet[2741]: W1031 13:35:00.326076 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.326109 kubelet[2741]: E1031 13:35:00.326090 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.326370 kubelet[2741]: E1031 13:35:00.326350 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.326493 kubelet[2741]: W1031 13:35:00.326474 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.326539 kubelet[2741]: E1031 13:35:00.326497 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.351992 containerd[1582]: time="2025-10-31T13:35:00.351571233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb49f9ff-xhtzj,Uid:75146c56-d887-43b0-aa3d-36f3195610ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"d002f3258fbf4846af3359c594418ae58c79465845458b74feda7677f2a4c113\"" Oct 31 13:35:00.357649 kubelet[2741]: E1031 13:35:00.357614 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:00.361681 containerd[1582]: time="2025-10-31T13:35:00.361634368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 13:35:00.366782 kubelet[2741]: E1031 13:35:00.366716 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:00.367366 containerd[1582]: time="2025-10-31T13:35:00.367160480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tfb77,Uid:9a25ead5-3f4e-4dec-adb2-8def6cd62bc4,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:00.385329 containerd[1582]: time="2025-10-31T13:35:00.385283164Z" level=info 
msg="connecting to shim eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6" address="unix:///run/containerd/s/04489eb71773d7180c9abb53b81d19f976139c83eff57589a5cfdf81e737c612" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:00.423488 systemd[1]: Started cri-containerd-eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6.scope - libcontainer container eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6. Oct 31 13:35:00.425933 kubelet[2741]: E1031 13:35:00.425906 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.425933 kubelet[2741]: W1031 13:35:00.425932 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.426079 kubelet[2741]: E1031 13:35:00.425952 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.426816 kubelet[2741]: E1031 13:35:00.426404 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.426816 kubelet[2741]: W1031 13:35:00.426421 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.426816 kubelet[2741]: E1031 13:35:00.426446 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.426816 kubelet[2741]: E1031 13:35:00.426672 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.426816 kubelet[2741]: W1031 13:35:00.426692 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.426816 kubelet[2741]: E1031 13:35:00.426702 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.426999 kubelet[2741]: E1031 13:35:00.426935 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.426999 kubelet[2741]: W1031 13:35:00.426951 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.426999 kubelet[2741]: E1031 13:35:00.426966 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.427267 kubelet[2741]: E1031 13:35:00.427206 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.427267 kubelet[2741]: W1031 13:35:00.427223 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.427338 kubelet[2741]: E1031 13:35:00.427270 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.427529 kubelet[2741]: E1031 13:35:00.427499 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.427529 kubelet[2741]: W1031 13:35:00.427513 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.427529 kubelet[2741]: E1031 13:35:00.427523 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.428422 kubelet[2741]: E1031 13:35:00.428397 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.428422 kubelet[2741]: W1031 13:35:00.428412 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.428422 kubelet[2741]: E1031 13:35:00.428424 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.428813 kubelet[2741]: E1031 13:35:00.428789 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.428813 kubelet[2741]: W1031 13:35:00.428811 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.428871 kubelet[2741]: E1031 13:35:00.428822 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.429021 kubelet[2741]: E1031 13:35:00.429008 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.429021 kubelet[2741]: W1031 13:35:00.429018 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.429166 kubelet[2741]: E1031 13:35:00.429027 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.429166 kubelet[2741]: E1031 13:35:00.429157 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.429166 kubelet[2741]: W1031 13:35:00.429164 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.429237 kubelet[2741]: E1031 13:35:00.429173 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.429525 kubelet[2741]: E1031 13:35:00.429456 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.429525 kubelet[2741]: W1031 13:35:00.429469 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.429525 kubelet[2741]: E1031 13:35:00.429478 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.429789 kubelet[2741]: E1031 13:35:00.429671 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.429789 kubelet[2741]: W1031 13:35:00.429682 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.429789 kubelet[2741]: E1031 13:35:00.429690 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.429994 kubelet[2741]: E1031 13:35:00.429868 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.429994 kubelet[2741]: W1031 13:35:00.429877 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.429994 kubelet[2741]: E1031 13:35:00.429885 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.430127 kubelet[2741]: E1031 13:35:00.430099 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.430127 kubelet[2741]: W1031 13:35:00.430118 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.430322 kubelet[2741]: E1031 13:35:00.430131 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.430538 kubelet[2741]: E1031 13:35:00.430362 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.430538 kubelet[2741]: W1031 13:35:00.430373 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.430538 kubelet[2741]: E1031 13:35:00.430382 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.431279 kubelet[2741]: E1031 13:35:00.431215 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.431345 kubelet[2741]: W1031 13:35:00.431245 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.431345 kubelet[2741]: E1031 13:35:00.431313 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.432043 kubelet[2741]: E1031 13:35:00.432016 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.432043 kubelet[2741]: W1031 13:35:00.432033 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.432043 kubelet[2741]: E1031 13:35:00.432047 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.432302 kubelet[2741]: E1031 13:35:00.432228 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.432302 kubelet[2741]: W1031 13:35:00.432236 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.432302 kubelet[2741]: E1031 13:35:00.432247 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.432745 kubelet[2741]: E1031 13:35:00.432564 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.432745 kubelet[2741]: W1031 13:35:00.432579 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.432745 kubelet[2741]: E1031 13:35:00.432591 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.432994 kubelet[2741]: E1031 13:35:00.432806 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.432994 kubelet[2741]: W1031 13:35:00.432815 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.432994 kubelet[2741]: E1031 13:35:00.432825 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.434310 kubelet[2741]: E1031 13:35:00.433217 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.434310 kubelet[2741]: W1031 13:35:00.433234 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.434310 kubelet[2741]: E1031 13:35:00.433297 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.434310 kubelet[2741]: E1031 13:35:00.433496 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.434310 kubelet[2741]: W1031 13:35:00.433507 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.434310 kubelet[2741]: E1031 13:35:00.433518 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.434310 kubelet[2741]: E1031 13:35:00.433696 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.434310 kubelet[2741]: W1031 13:35:00.433706 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.434310 kubelet[2741]: E1031 13:35:00.433716 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.434310 kubelet[2741]: E1031 13:35:00.433897 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.434554 kubelet[2741]: W1031 13:35:00.433905 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.434554 kubelet[2741]: E1031 13:35:00.433915 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.434554 kubelet[2741]: E1031 13:35:00.434079 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.434554 kubelet[2741]: W1031 13:35:00.434087 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.434554 kubelet[2741]: E1031 13:35:00.434095 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:00.446288 kubelet[2741]: E1031 13:35:00.443663 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:00.446288 kubelet[2741]: W1031 13:35:00.443690 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:00.446288 kubelet[2741]: E1031 13:35:00.443709 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:00.456596 containerd[1582]: time="2025-10-31T13:35:00.456558929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tfb77,Uid:9a25ead5-3f4e-4dec-adb2-8def6cd62bc4,Namespace:calico-system,Attempt:0,} returns sandbox id \"eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6\"" Oct 31 13:35:00.457367 kubelet[2741]: E1031 13:35:00.457347 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:01.573511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115195334.mount: Deactivated successfully. Oct 31 13:35:01.759438 kubelet[2741]: E1031 13:35:01.759325 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skbn9" podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:03.032395 containerd[1582]: time="2025-10-31T13:35:03.032353406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 31 13:35:03.035549 containerd[1582]: time="2025-10-31T13:35:03.035510525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.673824108s" Oct 31 13:35:03.035549 containerd[1582]: time="2025-10-31T13:35:03.035547690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 31 
13:35:03.036762 containerd[1582]: time="2025-10-31T13:35:03.036733510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 13:35:03.038588 containerd[1582]: time="2025-10-31T13:35:03.038471293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:03.042470 containerd[1582]: time="2025-10-31T13:35:03.039224047Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:03.042470 containerd[1582]: time="2025-10-31T13:35:03.039746246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:03.050490 containerd[1582]: time="2025-10-31T13:35:03.050458869Z" level=info msg="CreateContainer within sandbox \"d002f3258fbf4846af3359c594418ae58c79465845458b74feda7677f2a4c113\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 13:35:03.058303 containerd[1582]: time="2025-10-31T13:35:03.058146194Z" level=info msg="Container 1fc2d778ebfd17989b45135957cf1e9f122d4723336eda7ae6fd73f704de3a0f: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:35:03.064632 containerd[1582]: time="2025-10-31T13:35:03.064592291Z" level=info msg="CreateContainer within sandbox \"d002f3258fbf4846af3359c594418ae58c79465845458b74feda7677f2a4c113\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1fc2d778ebfd17989b45135957cf1e9f122d4723336eda7ae6fd73f704de3a0f\"" Oct 31 13:35:03.065844 containerd[1582]: time="2025-10-31T13:35:03.065806155Z" level=info msg="StartContainer for \"1fc2d778ebfd17989b45135957cf1e9f122d4723336eda7ae6fd73f704de3a0f\"" Oct 31 13:35:03.067001 containerd[1582]: time="2025-10-31T13:35:03.066975892Z" level=info 
msg="connecting to shim 1fc2d778ebfd17989b45135957cf1e9f122d4723336eda7ae6fd73f704de3a0f" address="unix:///run/containerd/s/2acd975792b3cb1fa977b36004f26d832033addde3306b8c00fc60330417dcda" protocol=ttrpc version=3 Oct 31 13:35:03.090445 systemd[1]: Started cri-containerd-1fc2d778ebfd17989b45135957cf1e9f122d4723336eda7ae6fd73f704de3a0f.scope - libcontainer container 1fc2d778ebfd17989b45135957cf1e9f122d4723336eda7ae6fd73f704de3a0f. Oct 31 13:35:03.127120 containerd[1582]: time="2025-10-31T13:35:03.127067756Z" level=info msg="StartContainer for \"1fc2d778ebfd17989b45135957cf1e9f122d4723336eda7ae6fd73f704de3a0f\" returns successfully" Oct 31 13:35:03.759082 kubelet[2741]: E1031 13:35:03.758755 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skbn9" podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:03.847830 kubelet[2741]: E1031 13:35:03.847775 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:03.857992 kubelet[2741]: I1031 13:35:03.857704 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cb49f9ff-xhtzj" podStartSLOduration=2.179697638 podStartE2EDuration="4.857690008s" podCreationTimestamp="2025-10-31 13:34:59 +0000 UTC" firstStartedPulling="2025-10-31 13:35:00.35828147 +0000 UTC m=+22.690830637" lastFinishedPulling="2025-10-31 13:35:03.03627384 +0000 UTC m=+25.368823007" observedRunningTime="2025-10-31 13:35:03.856922931 +0000 UTC m=+26.189472058" watchObservedRunningTime="2025-10-31 13:35:03.857690008 +0000 UTC m=+26.190239135" Oct 31 13:35:03.926246 kubelet[2741]: E1031 13:35:03.926217 2741 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.926246 kubelet[2741]: W1031 13:35:03.926239 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.926413 kubelet[2741]: E1031 13:35:03.926267 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.926413 kubelet[2741]: E1031 13:35:03.926393 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.926458 kubelet[2741]: W1031 13:35:03.926401 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.926458 kubelet[2741]: E1031 13:35:03.926440 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.926586 kubelet[2741]: E1031 13:35:03.926576 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.926621 kubelet[2741]: W1031 13:35:03.926586 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.926621 kubelet[2741]: E1031 13:35:03.926594 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.926727 kubelet[2741]: E1031 13:35:03.926717 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.926727 kubelet[2741]: W1031 13:35:03.926727 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.926770 kubelet[2741]: E1031 13:35:03.926736 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.926873 kubelet[2741]: E1031 13:35:03.926863 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.926905 kubelet[2741]: W1031 13:35:03.926875 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.926905 kubelet[2741]: E1031 13:35:03.926883 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.927004 kubelet[2741]: E1031 13:35:03.926994 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.927004 kubelet[2741]: W1031 13:35:03.927004 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.927065 kubelet[2741]: E1031 13:35:03.927011 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.927145 kubelet[2741]: E1031 13:35:03.927135 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.927145 kubelet[2741]: W1031 13:35:03.927145 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.927202 kubelet[2741]: E1031 13:35:03.927152 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.927333 kubelet[2741]: E1031 13:35:03.927323 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.927333 kubelet[2741]: W1031 13:35:03.927333 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.927393 kubelet[2741]: E1031 13:35:03.927341 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.927486 kubelet[2741]: E1031 13:35:03.927476 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.927486 kubelet[2741]: W1031 13:35:03.927486 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.927538 kubelet[2741]: E1031 13:35:03.927493 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.927618 kubelet[2741]: E1031 13:35:03.927609 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.927618 kubelet[2741]: W1031 13:35:03.927618 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.927674 kubelet[2741]: E1031 13:35:03.927626 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.927747 kubelet[2741]: E1031 13:35:03.927737 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.927747 kubelet[2741]: W1031 13:35:03.927747 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.927800 kubelet[2741]: E1031 13:35:03.927754 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.927880 kubelet[2741]: E1031 13:35:03.927870 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.927908 kubelet[2741]: W1031 13:35:03.927880 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.927908 kubelet[2741]: E1031 13:35:03.927887 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.928017 kubelet[2741]: E1031 13:35:03.928007 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.928046 kubelet[2741]: W1031 13:35:03.928018 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.928046 kubelet[2741]: E1031 13:35:03.928025 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.928147 kubelet[2741]: E1031 13:35:03.928138 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.928147 kubelet[2741]: W1031 13:35:03.928147 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.928204 kubelet[2741]: E1031 13:35:03.928154 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.928301 kubelet[2741]: E1031 13:35:03.928290 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.928301 kubelet[2741]: W1031 13:35:03.928300 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.928351 kubelet[2741]: E1031 13:35:03.928308 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.954825 kubelet[2741]: E1031 13:35:03.954745 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.954825 kubelet[2741]: W1031 13:35:03.954766 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.954825 kubelet[2741]: E1031 13:35:03.954782 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.955008 kubelet[2741]: E1031 13:35:03.954993 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.955008 kubelet[2741]: W1031 13:35:03.955001 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.955072 kubelet[2741]: E1031 13:35:03.955009 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.955219 kubelet[2741]: E1031 13:35:03.955197 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.955219 kubelet[2741]: W1031 13:35:03.955216 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.955461 kubelet[2741]: E1031 13:35:03.955225 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.955616 kubelet[2741]: E1031 13:35:03.955579 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.955677 kubelet[2741]: W1031 13:35:03.955664 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.955732 kubelet[2741]: E1031 13:35:03.955721 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.956039 kubelet[2741]: E1031 13:35:03.955939 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.956039 kubelet[2741]: W1031 13:35:03.955950 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.956039 kubelet[2741]: E1031 13:35:03.955959 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.956197 kubelet[2741]: E1031 13:35:03.956185 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.956284 kubelet[2741]: W1031 13:35:03.956253 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.956431 kubelet[2741]: E1031 13:35:03.956325 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.956528 kubelet[2741]: E1031 13:35:03.956517 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.956591 kubelet[2741]: W1031 13:35:03.956580 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.956641 kubelet[2741]: E1031 13:35:03.956631 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.956964 kubelet[2741]: E1031 13:35:03.956847 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.956964 kubelet[2741]: W1031 13:35:03.956860 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.956964 kubelet[2741]: E1031 13:35:03.956869 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.957107 kubelet[2741]: E1031 13:35:03.957096 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.957162 kubelet[2741]: W1031 13:35:03.957152 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.957570 kubelet[2741]: E1031 13:35:03.957219 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.957753 kubelet[2741]: E1031 13:35:03.957732 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.957753 kubelet[2741]: W1031 13:35:03.957747 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.957812 kubelet[2741]: E1031 13:35:03.957760 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.958359 kubelet[2741]: E1031 13:35:03.958339 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.958359 kubelet[2741]: W1031 13:35:03.958357 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.958419 kubelet[2741]: E1031 13:35:03.958371 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.958672 kubelet[2741]: E1031 13:35:03.958656 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.958773 kubelet[2741]: W1031 13:35:03.958759 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.958868 kubelet[2741]: E1031 13:35:03.958854 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.959253 kubelet[2741]: E1031 13:35:03.959225 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.959253 kubelet[2741]: W1031 13:35:03.959243 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.959335 kubelet[2741]: E1031 13:35:03.959255 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.959624 kubelet[2741]: E1031 13:35:03.959608 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.959624 kubelet[2741]: W1031 13:35:03.959621 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.959716 kubelet[2741]: E1031 13:35:03.959632 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.959978 kubelet[2741]: E1031 13:35:03.959962 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.959978 kubelet[2741]: W1031 13:35:03.959976 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.960180 kubelet[2741]: E1031 13:35:03.959987 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.960541 kubelet[2741]: E1031 13:35:03.960525 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.960541 kubelet[2741]: W1031 13:35:03.960540 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.960593 kubelet[2741]: E1031 13:35:03.960552 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:03.960884 kubelet[2741]: E1031 13:35:03.960869 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.960884 kubelet[2741]: W1031 13:35:03.960882 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.960929 kubelet[2741]: E1031 13:35:03.960893 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:35:03.961810 kubelet[2741]: E1031 13:35:03.961794 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:35:03.961810 kubelet[2741]: W1031 13:35:03.961808 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:35:03.961876 kubelet[2741]: E1031 13:35:03.961821 2741 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:35:04.287068 containerd[1582]: time="2025-10-31T13:35:04.287014222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:04.288168 containerd[1582]: time="2025-10-31T13:35:04.288137666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 31 13:35:04.288968 containerd[1582]: time="2025-10-31T13:35:04.288943343Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:04.290862 containerd[1582]: time="2025-10-31T13:35:04.290835978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:04.291615 containerd[1582]: time="2025-10-31T13:35:04.291587847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.254827813s" Oct 31 13:35:04.291653 containerd[1582]: time="2025-10-31T13:35:04.291620332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 31 13:35:04.295800 containerd[1582]: time="2025-10-31T13:35:04.295768935Z" level=info msg="CreateContainer within sandbox \"eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 13:35:04.304452 containerd[1582]: time="2025-10-31T13:35:04.304420113Z" level=info msg="Container 3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:35:04.305406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208671520.mount: Deactivated successfully. Oct 31 13:35:04.311330 containerd[1582]: time="2025-10-31T13:35:04.311286991Z" level=info msg="CreateContainer within sandbox \"eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912\"" Oct 31 13:35:04.311804 containerd[1582]: time="2025-10-31T13:35:04.311728576Z" level=info msg="StartContainer for \"3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912\"" Oct 31 13:35:04.313038 containerd[1582]: time="2025-10-31T13:35:04.313014643Z" level=info msg="connecting to shim 3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912" address="unix:///run/containerd/s/04489eb71773d7180c9abb53b81d19f976139c83eff57589a5cfdf81e737c612" protocol=ttrpc version=3 Oct 31 13:35:04.333412 systemd[1]: Started cri-containerd-3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912.scope - libcontainer container 3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912. Oct 31 13:35:04.368193 containerd[1582]: time="2025-10-31T13:35:04.368152060Z" level=info msg="StartContainer for \"3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912\" returns successfully" Oct 31 13:35:04.378392 systemd[1]: cri-containerd-3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912.scope: Deactivated successfully. 
Oct 31 13:35:04.382159 containerd[1582]: time="2025-10-31T13:35:04.382103408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912\" id:\"3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912\" pid:3496 exited_at:{seconds:1761917704 nanos:380615792}" Oct 31 13:35:04.385383 containerd[1582]: time="2025-10-31T13:35:04.385354721Z" level=info msg="received exit event container_id:\"3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912\" id:\"3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912\" pid:3496 exited_at:{seconds:1761917704 nanos:380615792}" Oct 31 13:35:04.402953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3daddf929f5348824a7a96babe5bbc8f8db29ae479de0c45c548ceb80e6c3912-rootfs.mount: Deactivated successfully. Oct 31 13:35:04.851617 kubelet[2741]: I1031 13:35:04.851587 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 13:35:04.853285 kubelet[2741]: E1031 13:35:04.851860 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:04.853361 containerd[1582]: time="2025-10-31T13:35:04.852932027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 13:35:04.853498 kubelet[2741]: E1031 13:35:04.853301 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:05.758474 kubelet[2741]: E1031 13:35:05.758388 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-skbn9" 
podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:06.980807 containerd[1582]: time="2025-10-31T13:35:06.980751150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:06.981462 containerd[1582]: time="2025-10-31T13:35:06.981432441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 31 13:35:06.982287 containerd[1582]: time="2025-10-31T13:35:06.982242270Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:06.986174 containerd[1582]: time="2025-10-31T13:35:06.986128032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:06.986945 containerd[1582]: time="2025-10-31T13:35:06.986899255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.133934103s" Oct 31 13:35:06.986945 containerd[1582]: time="2025-10-31T13:35:06.986935900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 31 13:35:06.991974 containerd[1582]: time="2025-10-31T13:35:06.991935372Z" level=info msg="CreateContainer within sandbox \"eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 13:35:07.003247 containerd[1582]: 
time="2025-10-31T13:35:07.003101422Z" level=info msg="Container 817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:35:07.011599 containerd[1582]: time="2025-10-31T13:35:07.011555555Z" level=info msg="CreateContainer within sandbox \"eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b\"" Oct 31 13:35:07.012198 containerd[1582]: time="2025-10-31T13:35:07.012173635Z" level=info msg="StartContainer for \"817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b\"" Oct 31 13:35:07.013787 containerd[1582]: time="2025-10-31T13:35:07.013757800Z" level=info msg="connecting to shim 817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b" address="unix:///run/containerd/s/04489eb71773d7180c9abb53b81d19f976139c83eff57589a5cfdf81e737c612" protocol=ttrpc version=3 Oct 31 13:35:07.036443 systemd[1]: Started cri-containerd-817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b.scope - libcontainer container 817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b. Oct 31 13:35:07.069353 containerd[1582]: time="2025-10-31T13:35:07.069316903Z" level=info msg="StartContainer for \"817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b\" returns successfully" Oct 31 13:35:07.571809 systemd[1]: cri-containerd-817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b.scope: Deactivated successfully. Oct 31 13:35:07.572293 systemd[1]: cri-containerd-817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b.scope: Consumed 463ms CPU time, 181M memory peak, 3M read from disk, 165.9M written to disk. 
Oct 31 13:35:07.591765 containerd[1582]: time="2025-10-31T13:35:07.591729086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b\" id:\"817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b\" pid:3558 exited_at:{seconds:1761917707 nanos:591406844}" Oct 31 13:35:07.591989 containerd[1582]: time="2025-10-31T13:35:07.591878265Z" level=info msg="received exit event container_id:\"817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b\" id:\"817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b\" pid:3558 exited_at:{seconds:1761917707 nanos:591406844}" Oct 31 13:35:07.609862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-817951da2f4b760959e2ebf31234e606a580a811c71ee4b0ce8bc1d80852e81b-rootfs.mount: Deactivated successfully. Oct 31 13:35:07.615681 kubelet[2741]: I1031 13:35:07.615657 2741 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 13:35:07.751022 systemd[1]: Created slice kubepods-burstable-poddda6e16a_d67b_4626_8e4a_5373e472e2fd.slice - libcontainer container kubepods-burstable-poddda6e16a_d67b_4626_8e4a_5373e472e2fd.slice. Oct 31 13:35:07.764657 systemd[1]: Created slice kubepods-besteffort-poda92d4cd4_ad4d_4e88_ae10_529f08ae8b8d.slice - libcontainer container kubepods-besteffort-poda92d4cd4_ad4d_4e88_ae10_529f08ae8b8d.slice. Oct 31 13:35:07.772734 systemd[1]: Created slice kubepods-besteffort-pod7e03f713_e3f7_401b_a028_09875138e499.slice - libcontainer container kubepods-besteffort-pod7e03f713_e3f7_401b_a028_09875138e499.slice. Oct 31 13:35:07.780387 systemd[1]: Created slice kubepods-besteffort-podeeb05b47_85b4_418f_a8c7_06c3a0435abf.slice - libcontainer container kubepods-besteffort-podeeb05b47_85b4_418f_a8c7_06c3a0435abf.slice. 
Oct 31 13:35:07.786084 kubelet[2741]: I1031 13:35:07.786052 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dda6e16a-d67b-4626-8e4a-5373e472e2fd-config-volume\") pod \"coredns-674b8bbfcf-wr62z\" (UID: \"dda6e16a-d67b-4626-8e4a-5373e472e2fd\") " pod="kube-system/coredns-674b8bbfcf-wr62z" Oct 31 13:35:07.786457 kubelet[2741]: I1031 13:35:07.786136 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t822\" (UniqueName: \"kubernetes.io/projected/7e03f713-e3f7-401b-a028-09875138e499-kube-api-access-4t822\") pod \"calico-kube-controllers-7bbbd67f67-xbmvj\" (UID: \"7e03f713-e3f7-401b-a028-09875138e499\") " pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" Oct 31 13:35:07.786457 kubelet[2741]: I1031 13:35:07.786165 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dfff0312-ad1e-473d-81aa-fca6e368f968-calico-apiserver-certs\") pod \"calico-apiserver-65687fc5c7-d8f78\" (UID: \"dfff0312-ad1e-473d-81aa-fca6e368f968\") " pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" Oct 31 13:35:07.786457 kubelet[2741]: I1031 13:35:07.786184 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpp7n\" (UniqueName: \"kubernetes.io/projected/a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d-kube-api-access-cpp7n\") pod \"goldmane-666569f655-dnn47\" (UID: \"a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d\") " pod="calico-system/goldmane-666569f655-dnn47" Oct 31 13:35:07.786457 kubelet[2741]: I1031 13:35:07.786220 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8dq\" (UniqueName: \"kubernetes.io/projected/dfff0312-ad1e-473d-81aa-fca6e368f968-kube-api-access-jl8dq\") pod 
\"calico-apiserver-65687fc5c7-d8f78\" (UID: \"dfff0312-ad1e-473d-81aa-fca6e368f968\") " pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" Oct 31 13:35:07.786457 kubelet[2741]: I1031 13:35:07.786289 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d-goldmane-key-pair\") pod \"goldmane-666569f655-dnn47\" (UID: \"a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d\") " pod="calico-system/goldmane-666569f655-dnn47" Oct 31 13:35:07.786962 kubelet[2741]: I1031 13:35:07.786885 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/133352ff-41f4-4716-9887-1e564e25f603-whisker-backend-key-pair\") pod \"whisker-79d674987b-77fpx\" (UID: \"133352ff-41f4-4716-9887-1e564e25f603\") " pod="calico-system/whisker-79d674987b-77fpx" Oct 31 13:35:07.786962 kubelet[2741]: I1031 13:35:07.786931 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e03f713-e3f7-401b-a028-09875138e499-tigera-ca-bundle\") pod \"calico-kube-controllers-7bbbd67f67-xbmvj\" (UID: \"7e03f713-e3f7-401b-a028-09875138e499\") " pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" Oct 31 13:35:07.787031 kubelet[2741]: I1031 13:35:07.786999 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d-config\") pod \"goldmane-666569f655-dnn47\" (UID: \"a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d\") " pod="calico-system/goldmane-666569f655-dnn47" Oct 31 13:35:07.787056 kubelet[2741]: I1031 13:35:07.787028 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d-goldmane-ca-bundle\") pod \"goldmane-666569f655-dnn47\" (UID: \"a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d\") " pod="calico-system/goldmane-666569f655-dnn47" Oct 31 13:35:07.787056 kubelet[2741]: I1031 13:35:07.787049 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133352ff-41f4-4716-9887-1e564e25f603-whisker-ca-bundle\") pod \"whisker-79d674987b-77fpx\" (UID: \"133352ff-41f4-4716-9887-1e564e25f603\") " pod="calico-system/whisker-79d674987b-77fpx" Oct 31 13:35:07.787418 kubelet[2741]: I1031 13:35:07.787065 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eeb05b47-85b4-418f-a8c7-06c3a0435abf-calico-apiserver-certs\") pod \"calico-apiserver-65687fc5c7-n2vln\" (UID: \"eeb05b47-85b4-418f-a8c7-06c3a0435abf\") " pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" Oct 31 13:35:07.787418 kubelet[2741]: I1031 13:35:07.787214 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbr7\" (UniqueName: \"kubernetes.io/projected/dda6e16a-d67b-4626-8e4a-5373e472e2fd-kube-api-access-lwbr7\") pod \"coredns-674b8bbfcf-wr62z\" (UID: \"dda6e16a-d67b-4626-8e4a-5373e472e2fd\") " pod="kube-system/coredns-674b8bbfcf-wr62z" Oct 31 13:35:07.787726 kubelet[2741]: I1031 13:35:07.787496 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmz9\" (UniqueName: \"kubernetes.io/projected/eeb05b47-85b4-418f-a8c7-06c3a0435abf-kube-api-access-zvmz9\") pod \"calico-apiserver-65687fc5c7-n2vln\" (UID: \"eeb05b47-85b4-418f-a8c7-06c3a0435abf\") " pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" Oct 31 13:35:07.788616 kubelet[2741]: I1031 13:35:07.788485 2741 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbskv\" (UniqueName: \"kubernetes.io/projected/133352ff-41f4-4716-9887-1e564e25f603-kube-api-access-nbskv\") pod \"whisker-79d674987b-77fpx\" (UID: \"133352ff-41f4-4716-9887-1e564e25f603\") " pod="calico-system/whisker-79d674987b-77fpx" Oct 31 13:35:07.789598 systemd[1]: Created slice kubepods-besteffort-poddfff0312_ad1e_473d_81aa_fca6e368f968.slice - libcontainer container kubepods-besteffort-poddfff0312_ad1e_473d_81aa_fca6e368f968.slice. Oct 31 13:35:07.794054 systemd[1]: Created slice kubepods-besteffort-pod133352ff_41f4_4716_9887_1e564e25f603.slice - libcontainer container kubepods-besteffort-pod133352ff_41f4_4716_9887_1e564e25f603.slice. Oct 31 13:35:07.800803 systemd[1]: Created slice kubepods-burstable-pod94117cea_24dc_4751_95f1_2f28371123a9.slice - libcontainer container kubepods-burstable-pod94117cea_24dc_4751_95f1_2f28371123a9.slice. Oct 31 13:35:07.809687 systemd[1]: Created slice kubepods-besteffort-pod8ccd0d25_38f0_4382_b6b6_b6b5dfa955fd.slice - libcontainer container kubepods-besteffort-pod8ccd0d25_38f0_4382_b6b6_b6b5dfa955fd.slice. 
Oct 31 13:35:07.813060 containerd[1582]: time="2025-10-31T13:35:07.813023617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-skbn9,Uid:8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:07.862686 kubelet[2741]: E1031 13:35:07.861999 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:07.864608 containerd[1582]: time="2025-10-31T13:35:07.864254640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 13:35:07.889433 kubelet[2741]: I1031 13:35:07.889392 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tl8\" (UniqueName: \"kubernetes.io/projected/94117cea-24dc-4751-95f1-2f28371123a9-kube-api-access-w6tl8\") pod \"coredns-674b8bbfcf-zxtlg\" (UID: \"94117cea-24dc-4751-95f1-2f28371123a9\") " pod="kube-system/coredns-674b8bbfcf-zxtlg" Oct 31 13:35:07.889711 kubelet[2741]: I1031 13:35:07.889537 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94117cea-24dc-4751-95f1-2f28371123a9-config-volume\") pod \"coredns-674b8bbfcf-zxtlg\" (UID: \"94117cea-24dc-4751-95f1-2f28371123a9\") " pod="kube-system/coredns-674b8bbfcf-zxtlg" Oct 31 13:35:07.926678 containerd[1582]: time="2025-10-31T13:35:07.926610662Z" level=error msg="Failed to destroy network for sandbox \"4745a6e7b2e77984fc09ed624805b39930d6e51d6a1176f3c1002449e016728d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:07.927663 containerd[1582]: time="2025-10-31T13:35:07.927610911Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-skbn9,Uid:8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4745a6e7b2e77984fc09ed624805b39930d6e51d6a1176f3c1002449e016728d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:07.930563 kubelet[2741]: E1031 13:35:07.930519 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4745a6e7b2e77984fc09ed624805b39930d6e51d6a1176f3c1002449e016728d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:07.930626 kubelet[2741]: E1031 13:35:07.930602 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4745a6e7b2e77984fc09ed624805b39930d6e51d6a1176f3c1002449e016728d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-skbn9" Oct 31 13:35:07.930662 kubelet[2741]: E1031 13:35:07.930629 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4745a6e7b2e77984fc09ed624805b39930d6e51d6a1176f3c1002449e016728d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-skbn9" Oct 31 13:35:07.930719 kubelet[2741]: E1031 13:35:07.930694 2741 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-skbn9_calico-system(8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-skbn9_calico-system(8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4745a6e7b2e77984fc09ed624805b39930d6e51d6a1176f3c1002449e016728d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-skbn9" podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:08.015247 systemd[1]: run-netns-cni\x2d0e6b747f\x2d244f\x2d7156\x2da414\x2d0912a1d1e3fe.mount: Deactivated successfully. Oct 31 13:35:08.061936 kubelet[2741]: E1031 13:35:08.061880 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:08.062678 containerd[1582]: time="2025-10-31T13:35:08.062621918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wr62z,Uid:dda6e16a-d67b-4626-8e4a-5373e472e2fd,Namespace:kube-system,Attempt:0,}" Oct 31 13:35:08.069268 containerd[1582]: time="2025-10-31T13:35:08.069216979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dnn47,Uid:a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:08.075979 containerd[1582]: time="2025-10-31T13:35:08.075941817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbbd67f67-xbmvj,Uid:7e03f713-e3f7-401b-a028-09875138e499,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:08.085110 containerd[1582]: time="2025-10-31T13:35:08.085021908Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-65687fc5c7-n2vln,Uid:eeb05b47-85b4-418f-a8c7-06c3a0435abf,Namespace:calico-apiserver,Attempt:0,}" Oct 31 13:35:08.094248 containerd[1582]: time="2025-10-31T13:35:08.094156966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65687fc5c7-d8f78,Uid:dfff0312-ad1e-473d-81aa-fca6e368f968,Namespace:calico-apiserver,Attempt:0,}" Oct 31 13:35:08.104824 kubelet[2741]: E1031 13:35:08.104783 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:08.105857 containerd[1582]: time="2025-10-31T13:35:08.105592511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxtlg,Uid:94117cea-24dc-4751-95f1-2f28371123a9,Namespace:kube-system,Attempt:0,}" Oct 31 13:35:08.106207 containerd[1582]: time="2025-10-31T13:35:08.106182424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d674987b-77fpx,Uid:133352ff-41f4-4716-9887-1e564e25f603,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:08.147654 containerd[1582]: time="2025-10-31T13:35:08.147531215Z" level=error msg="Failed to destroy network for sandbox \"e0e9d80d7d538f8b748878fe5319c4a9261b5f0305beb766a82afe58d2a4b357\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.149014 containerd[1582]: time="2025-10-31T13:35:08.148968034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wr62z,Uid:dda6e16a-d67b-4626-8e4a-5373e472e2fd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e9d80d7d538f8b748878fe5319c4a9261b5f0305beb766a82afe58d2a4b357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.149256 kubelet[2741]: E1031 13:35:08.149214 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e9d80d7d538f8b748878fe5319c4a9261b5f0305beb766a82afe58d2a4b357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.149401 kubelet[2741]: E1031 13:35:08.149377 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e9d80d7d538f8b748878fe5319c4a9261b5f0305beb766a82afe58d2a4b357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wr62z" Oct 31 13:35:08.149447 kubelet[2741]: E1031 13:35:08.149405 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e9d80d7d538f8b748878fe5319c4a9261b5f0305beb766a82afe58d2a4b357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wr62z" Oct 31 13:35:08.149495 kubelet[2741]: E1031 13:35:08.149464 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wr62z_kube-system(dda6e16a-d67b-4626-8e4a-5373e472e2fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wr62z_kube-system(dda6e16a-d67b-4626-8e4a-5373e472e2fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e0e9d80d7d538f8b748878fe5319c4a9261b5f0305beb766a82afe58d2a4b357\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wr62z" podUID="dda6e16a-d67b-4626-8e4a-5373e472e2fd" Oct 31 13:35:08.170536 containerd[1582]: time="2025-10-31T13:35:08.170477514Z" level=error msg="Failed to destroy network for sandbox \"8efe1ba2ce17b7ecd037abb6e739bd34fbc058ed42e6c697baf19452c30f0dca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.172285 containerd[1582]: time="2025-10-31T13:35:08.172200728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbbd67f67-xbmvj,Uid:7e03f713-e3f7-401b-a028-09875138e499,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efe1ba2ce17b7ecd037abb6e739bd34fbc058ed42e6c697baf19452c30f0dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.173003 kubelet[2741]: E1031 13:35:08.172949 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efe1ba2ce17b7ecd037abb6e739bd34fbc058ed42e6c697baf19452c30f0dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.173093 kubelet[2741]: E1031 13:35:08.173018 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8efe1ba2ce17b7ecd037abb6e739bd34fbc058ed42e6c697baf19452c30f0dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" Oct 31 13:35:08.173093 kubelet[2741]: E1031 13:35:08.173048 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efe1ba2ce17b7ecd037abb6e739bd34fbc058ed42e6c697baf19452c30f0dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" Oct 31 13:35:08.173153 kubelet[2741]: E1031 13:35:08.173093 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bbbd67f67-xbmvj_calico-system(7e03f713-e3f7-401b-a028-09875138e499)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bbbd67f67-xbmvj_calico-system(7e03f713-e3f7-401b-a028-09875138e499)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8efe1ba2ce17b7ecd037abb6e739bd34fbc058ed42e6c697baf19452c30f0dca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" podUID="7e03f713-e3f7-401b-a028-09875138e499" Oct 31 13:35:08.177304 containerd[1582]: time="2025-10-31T13:35:08.177247877Z" level=error msg="Failed to destroy network for sandbox \"6457df1627655980a0a3eca67541de55239ebe2fd8bbd2b4bab66f45cfee1d82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.178251 containerd[1582]: time="2025-10-31T13:35:08.178217518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dnn47,Uid:a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6457df1627655980a0a3eca67541de55239ebe2fd8bbd2b4bab66f45cfee1d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.178624 kubelet[2741]: E1031 13:35:08.178582 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6457df1627655980a0a3eca67541de55239ebe2fd8bbd2b4bab66f45cfee1d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.178683 kubelet[2741]: E1031 13:35:08.178642 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6457df1627655980a0a3eca67541de55239ebe2fd8bbd2b4bab66f45cfee1d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dnn47" Oct 31 13:35:08.178683 kubelet[2741]: E1031 13:35:08.178661 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6457df1627655980a0a3eca67541de55239ebe2fd8bbd2b4bab66f45cfee1d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dnn47" Oct 31 13:35:08.178728 kubelet[2741]: E1031 13:35:08.178702 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dnn47_calico-system(a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dnn47_calico-system(a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6457df1627655980a0a3eca67541de55239ebe2fd8bbd2b4bab66f45cfee1d82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dnn47" podUID="a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d" Oct 31 13:35:08.191691 containerd[1582]: time="2025-10-31T13:35:08.191629549Z" level=error msg="Failed to destroy network for sandbox \"ecdac4ac76ae56c8190ddf63e6b554a1471b117ba94c6d819839884e49b14d17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.194280 containerd[1582]: time="2025-10-31T13:35:08.192879384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65687fc5c7-n2vln,Uid:eeb05b47-85b4-418f-a8c7-06c3a0435abf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecdac4ac76ae56c8190ddf63e6b554a1471b117ba94c6d819839884e49b14d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.194407 kubelet[2741]: E1031 13:35:08.193135 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"ecdac4ac76ae56c8190ddf63e6b554a1471b117ba94c6d819839884e49b14d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.194407 kubelet[2741]: E1031 13:35:08.193190 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecdac4ac76ae56c8190ddf63e6b554a1471b117ba94c6d819839884e49b14d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" Oct 31 13:35:08.194407 kubelet[2741]: E1031 13:35:08.193210 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecdac4ac76ae56c8190ddf63e6b554a1471b117ba94c6d819839884e49b14d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" Oct 31 13:35:08.194500 kubelet[2741]: E1031 13:35:08.193252 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65687fc5c7-n2vln_calico-apiserver(eeb05b47-85b4-418f-a8c7-06c3a0435abf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65687fc5c7-n2vln_calico-apiserver(eeb05b47-85b4-418f-a8c7-06c3a0435abf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecdac4ac76ae56c8190ddf63e6b554a1471b117ba94c6d819839884e49b14d17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" podUID="eeb05b47-85b4-418f-a8c7-06c3a0435abf" Oct 31 13:35:08.204585 containerd[1582]: time="2025-10-31T13:35:08.204529596Z" level=error msg="Failed to destroy network for sandbox \"2b56ee0c100fe4093fb91eef0c43a0568ce1e02f085c987cab0b013634c2e9d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.205406 containerd[1582]: time="2025-10-31T13:35:08.205373181Z" level=error msg="Failed to destroy network for sandbox \"b533c7ae6330eb5a4f57ef9086f5bdbd8d831b6bf165a601814a606924aff5c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.205574 containerd[1582]: time="2025-10-31T13:35:08.205543522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65687fc5c7-d8f78,Uid:dfff0312-ad1e-473d-81aa-fca6e368f968,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b56ee0c100fe4093fb91eef0c43a0568ce1e02f085c987cab0b013634c2e9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.205775 kubelet[2741]: E1031 13:35:08.205735 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b56ee0c100fe4093fb91eef0c43a0568ce1e02f085c987cab0b013634c2e9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 
13:35:08.205843 kubelet[2741]: E1031 13:35:08.205797 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b56ee0c100fe4093fb91eef0c43a0568ce1e02f085c987cab0b013634c2e9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" Oct 31 13:35:08.205874 kubelet[2741]: E1031 13:35:08.205848 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b56ee0c100fe4093fb91eef0c43a0568ce1e02f085c987cab0b013634c2e9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" Oct 31 13:35:08.205925 kubelet[2741]: E1031 13:35:08.205899 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65687fc5c7-d8f78_calico-apiserver(dfff0312-ad1e-473d-81aa-fca6e368f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65687fc5c7-d8f78_calico-apiserver(dfff0312-ad1e-473d-81aa-fca6e368f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b56ee0c100fe4093fb91eef0c43a0568ce1e02f085c987cab0b013634c2e9d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" podUID="dfff0312-ad1e-473d-81aa-fca6e368f968" Oct 31 13:35:08.206385 containerd[1582]: time="2025-10-31T13:35:08.206350623Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-zxtlg,Uid:94117cea-24dc-4751-95f1-2f28371123a9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b533c7ae6330eb5a4f57ef9086f5bdbd8d831b6bf165a601814a606924aff5c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.206529 kubelet[2741]: E1031 13:35:08.206503 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b533c7ae6330eb5a4f57ef9086f5bdbd8d831b6bf165a601814a606924aff5c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.206567 kubelet[2741]: E1031 13:35:08.206541 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b533c7ae6330eb5a4f57ef9086f5bdbd8d831b6bf165a601814a606924aff5c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zxtlg" Oct 31 13:35:08.206567 kubelet[2741]: E1031 13:35:08.206561 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b533c7ae6330eb5a4f57ef9086f5bdbd8d831b6bf165a601814a606924aff5c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zxtlg" Oct 31 13:35:08.206631 kubelet[2741]: E1031 13:35:08.206597 2741 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zxtlg_kube-system(94117cea-24dc-4751-95f1-2f28371123a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zxtlg_kube-system(94117cea-24dc-4751-95f1-2f28371123a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b533c7ae6330eb5a4f57ef9086f5bdbd8d831b6bf165a601814a606924aff5c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zxtlg" podUID="94117cea-24dc-4751-95f1-2f28371123a9" Oct 31 13:35:08.209647 containerd[1582]: time="2025-10-31T13:35:08.209610589Z" level=error msg="Failed to destroy network for sandbox \"038dfb3ab0540a7744c26e3d88e60a9fc03a6f3c132dc50c302f9679b5e92148\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.211136 containerd[1582]: time="2025-10-31T13:35:08.211101815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d674987b-77fpx,Uid:133352ff-41f4-4716-9887-1e564e25f603,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"038dfb3ab0540a7744c26e3d88e60a9fc03a6f3c132dc50c302f9679b5e92148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.211368 kubelet[2741]: E1031 13:35:08.211339 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"038dfb3ab0540a7744c26e3d88e60a9fc03a6f3c132dc50c302f9679b5e92148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:35:08.211417 kubelet[2741]: E1031 13:35:08.211384 2741 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"038dfb3ab0540a7744c26e3d88e60a9fc03a6f3c132dc50c302f9679b5e92148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79d674987b-77fpx" Oct 31 13:35:08.211446 kubelet[2741]: E1031 13:35:08.211425 2741 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"038dfb3ab0540a7744c26e3d88e60a9fc03a6f3c132dc50c302f9679b5e92148\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79d674987b-77fpx" Oct 31 13:35:08.211490 kubelet[2741]: E1031 13:35:08.211464 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79d674987b-77fpx_calico-system(133352ff-41f4-4716-9887-1e564e25f603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79d674987b-77fpx_calico-system(133352ff-41f4-4716-9887-1e564e25f603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"038dfb3ab0540a7744c26e3d88e60a9fc03a6f3c132dc50c302f9679b5e92148\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79d674987b-77fpx" podUID="133352ff-41f4-4716-9887-1e564e25f603" Oct 31 13:35:09.008190 systemd[1]: 
run-netns-cni\x2de4eab0f5\x2d566b\x2d18b4\x2dbc2f\x2d4aa81c322e59.mount: Deactivated successfully. Oct 31 13:35:09.008303 systemd[1]: run-netns-cni\x2d6cb9ff6c\x2de36d\x2d34b8\x2d8736\x2de4e72da0f040.mount: Deactivated successfully. Oct 31 13:35:09.008350 systemd[1]: run-netns-cni\x2da729d9fa\x2d301b\x2db7e3\x2d8989\x2d538647a9553b.mount: Deactivated successfully. Oct 31 13:35:09.008392 systemd[1]: run-netns-cni\x2d9e483e34\x2dcab1\x2d5342\x2d496f\x2d7f93d8de2ef6.mount: Deactivated successfully. Oct 31 13:35:11.653831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033646967.mount: Deactivated successfully. Oct 31 13:35:11.734423 containerd[1582]: time="2025-10-31T13:35:11.734355931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:11.745990 containerd[1582]: time="2025-10-31T13:35:11.735056010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 31 13:35:11.745990 containerd[1582]: time="2025-10-31T13:35:11.735691601Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:11.746101 containerd[1582]: time="2025-10-31T13:35:11.738003980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.873693373s" Oct 31 13:35:11.746101 containerd[1582]: time="2025-10-31T13:35:11.746091607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 31 
13:35:11.746556 containerd[1582]: time="2025-10-31T13:35:11.746531776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:35:11.776190 containerd[1582]: time="2025-10-31T13:35:11.776135936Z" level=info msg="CreateContainer within sandbox \"eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 13:35:11.786919 containerd[1582]: time="2025-10-31T13:35:11.786854538Z" level=info msg="Container 443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:35:11.802101 containerd[1582]: time="2025-10-31T13:35:11.802048121Z" level=info msg="CreateContainer within sandbox \"eff4c7fa3a46e8ffcb92cb95837445db7aa65312ec647c46916a5de66754d4a6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65\"" Oct 31 13:35:11.802697 containerd[1582]: time="2025-10-31T13:35:11.802662070Z" level=info msg="StartContainer for \"443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65\"" Oct 31 13:35:11.804739 containerd[1582]: time="2025-10-31T13:35:11.804694058Z" level=info msg="connecting to shim 443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65" address="unix:///run/containerd/s/04489eb71773d7180c9abb53b81d19f976139c83eff57589a5cfdf81e737c612" protocol=ttrpc version=3 Oct 31 13:35:11.827484 systemd[1]: Started cri-containerd-443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65.scope - libcontainer container 443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65. 
Oct 31 13:35:11.863992 containerd[1582]: time="2025-10-31T13:35:11.863943061Z" level=info msg="StartContainer for \"443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65\" returns successfully" Oct 31 13:35:11.878621 kubelet[2741]: E1031 13:35:11.878582 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:11.902980 kubelet[2741]: I1031 13:35:11.902681 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tfb77" podStartSLOduration=0.609786657 podStartE2EDuration="11.902654002s" podCreationTimestamp="2025-10-31 13:35:00 +0000 UTC" firstStartedPulling="2025-10-31 13:35:00.458330154 +0000 UTC m=+22.790879321" lastFinishedPulling="2025-10-31 13:35:11.751197499 +0000 UTC m=+34.083746666" observedRunningTime="2025-10-31 13:35:11.901639328 +0000 UTC m=+34.234188495" watchObservedRunningTime="2025-10-31 13:35:11.902654002 +0000 UTC m=+34.235203169" Oct 31 13:35:11.986068 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 13:35:11.986170 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Oct 31 13:35:12.229055 kubelet[2741]: I1031 13:35:12.228782 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/133352ff-41f4-4716-9887-1e564e25f603-whisker-backend-key-pair\") pod \"133352ff-41f4-4716-9887-1e564e25f603\" (UID: \"133352ff-41f4-4716-9887-1e564e25f603\") " Oct 31 13:35:12.229055 kubelet[2741]: I1031 13:35:12.228835 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbskv\" (UniqueName: \"kubernetes.io/projected/133352ff-41f4-4716-9887-1e564e25f603-kube-api-access-nbskv\") pod \"133352ff-41f4-4716-9887-1e564e25f603\" (UID: \"133352ff-41f4-4716-9887-1e564e25f603\") " Oct 31 13:35:12.229055 kubelet[2741]: I1031 13:35:12.228878 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133352ff-41f4-4716-9887-1e564e25f603-whisker-ca-bundle\") pod \"133352ff-41f4-4716-9887-1e564e25f603\" (UID: \"133352ff-41f4-4716-9887-1e564e25f603\") " Oct 31 13:35:12.243421 kubelet[2741]: I1031 13:35:12.243323 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133352ff-41f4-4716-9887-1e564e25f603-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "133352ff-41f4-4716-9887-1e564e25f603" (UID: "133352ff-41f4-4716-9887-1e564e25f603"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 13:35:12.243883 kubelet[2741]: I1031 13:35:12.243428 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133352ff-41f4-4716-9887-1e564e25f603-kube-api-access-nbskv" (OuterVolumeSpecName: "kube-api-access-nbskv") pod "133352ff-41f4-4716-9887-1e564e25f603" (UID: "133352ff-41f4-4716-9887-1e564e25f603"). InnerVolumeSpecName "kube-api-access-nbskv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 13:35:12.243883 kubelet[2741]: I1031 13:35:12.243845 2741 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/133352ff-41f4-4716-9887-1e564e25f603-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "133352ff-41f4-4716-9887-1e564e25f603" (UID: "133352ff-41f4-4716-9887-1e564e25f603"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 13:35:12.329911 kubelet[2741]: I1031 13:35:12.329858 2741 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/133352ff-41f4-4716-9887-1e564e25f603-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 13:35:12.329911 kubelet[2741]: I1031 13:35:12.329894 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nbskv\" (UniqueName: \"kubernetes.io/projected/133352ff-41f4-4716-9887-1e564e25f603-kube-api-access-nbskv\") on node \"localhost\" DevicePath \"\"" Oct 31 13:35:12.329911 kubelet[2741]: I1031 13:35:12.329902 2741 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133352ff-41f4-4716-9887-1e564e25f603-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 13:35:12.654715 systemd[1]: var-lib-kubelet-pods-133352ff\x2d41f4\x2d4716\x2d9887\x2d1e564e25f603-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnbskv.mount: Deactivated successfully. Oct 31 13:35:12.654811 systemd[1]: var-lib-kubelet-pods-133352ff\x2d41f4\x2d4716\x2d9887\x2d1e564e25f603-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 31 13:35:12.877566 kubelet[2741]: I1031 13:35:12.877481 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 13:35:12.878445 kubelet[2741]: E1031 13:35:12.878417 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:12.883373 systemd[1]: Removed slice kubepods-besteffort-pod133352ff_41f4_4716_9887_1e564e25f603.slice - libcontainer container kubepods-besteffort-pod133352ff_41f4_4716_9887_1e564e25f603.slice. Oct 31 13:35:12.994872 systemd[1]: Created slice kubepods-besteffort-pod0d0fe36f_47b0_435c_890e_97fd7f68acbd.slice - libcontainer container kubepods-besteffort-pod0d0fe36f_47b0_435c_890e_97fd7f68acbd.slice. Oct 31 13:35:13.034065 kubelet[2741]: I1031 13:35:13.033982 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d0fe36f-47b0-435c-890e-97fd7f68acbd-whisker-ca-bundle\") pod \"whisker-6875d8f7b6-5d9sp\" (UID: \"0d0fe36f-47b0-435c-890e-97fd7f68acbd\") " pod="calico-system/whisker-6875d8f7b6-5d9sp" Oct 31 13:35:13.034065 kubelet[2741]: I1031 13:35:13.034033 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvwgm\" (UniqueName: \"kubernetes.io/projected/0d0fe36f-47b0-435c-890e-97fd7f68acbd-kube-api-access-qvwgm\") pod \"whisker-6875d8f7b6-5d9sp\" (UID: \"0d0fe36f-47b0-435c-890e-97fd7f68acbd\") " pod="calico-system/whisker-6875d8f7b6-5d9sp" Oct 31 13:35:13.034565 kubelet[2741]: I1031 13:35:13.034125 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0d0fe36f-47b0-435c-890e-97fd7f68acbd-whisker-backend-key-pair\") pod \"whisker-6875d8f7b6-5d9sp\" (UID: \"0d0fe36f-47b0-435c-890e-97fd7f68acbd\") " 
pod="calico-system/whisker-6875d8f7b6-5d9sp" Oct 31 13:35:13.299986 containerd[1582]: time="2025-10-31T13:35:13.299868038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6875d8f7b6-5d9sp,Uid:0d0fe36f-47b0-435c-890e-97fd7f68acbd,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:13.593952 systemd-networkd[1488]: cali6bcb874cf01: Link UP Oct 31 13:35:13.596707 systemd-networkd[1488]: cali6bcb874cf01: Gained carrier Oct 31 13:35:13.614066 containerd[1582]: 2025-10-31 13:35:13.409 [INFO][4024] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 13:35:13.614066 containerd[1582]: 2025-10-31 13:35:13.463 [INFO][4024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0 whisker-6875d8f7b6- calico-system 0d0fe36f-47b0-435c-890e-97fd7f68acbd 915 0 2025-10-31 13:35:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6875d8f7b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6875d8f7b6-5d9sp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6bcb874cf01 [] [] }} ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Namespace="calico-system" Pod="whisker-6875d8f7b6-5d9sp" WorkloadEndpoint="localhost-k8s-whisker--6875d8f7b6--5d9sp-" Oct 31 13:35:13.614066 containerd[1582]: 2025-10-31 13:35:13.463 [INFO][4024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Namespace="calico-system" Pod="whisker-6875d8f7b6-5d9sp" WorkloadEndpoint="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" Oct 31 13:35:13.614066 containerd[1582]: 2025-10-31 13:35:13.530 [INFO][4043] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" HandleID="k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Workload="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.530 [INFO][4043] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" HandleID="k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Workload="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d960), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6875d8f7b6-5d9sp", "timestamp":"2025-10-31 13:35:13.530696891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.530 [INFO][4043] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.530 [INFO][4043] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.531 [INFO][4043] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.544 [INFO][4043] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" host="localhost" Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.552 [INFO][4043] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.557 [INFO][4043] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.561 [INFO][4043] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.563 [INFO][4043] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:13.614305 containerd[1582]: 2025-10-31 13:35:13.564 [INFO][4043] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" host="localhost" Oct 31 13:35:13.614537 containerd[1582]: 2025-10-31 13:35:13.565 [INFO][4043] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8 Oct 31 13:35:13.614537 containerd[1582]: 2025-10-31 13:35:13.571 [INFO][4043] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" host="localhost" Oct 31 13:35:13.614537 containerd[1582]: 2025-10-31 13:35:13.578 [INFO][4043] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" host="localhost" Oct 31 13:35:13.614537 containerd[1582]: 2025-10-31 13:35:13.578 [INFO][4043] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" host="localhost" Oct 31 13:35:13.614537 containerd[1582]: 2025-10-31 13:35:13.578 [INFO][4043] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 13:35:13.614537 containerd[1582]: 2025-10-31 13:35:13.580 [INFO][4043] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" HandleID="k8s-pod-network.f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Workload="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" Oct 31 13:35:13.614652 containerd[1582]: 2025-10-31 13:35:13.585 [INFO][4024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Namespace="calico-system" Pod="whisker-6875d8f7b6-5d9sp" WorkloadEndpoint="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0", GenerateName:"whisker-6875d8f7b6-", Namespace:"calico-system", SelfLink:"", UID:"0d0fe36f-47b0-435c-890e-97fd7f68acbd", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 35, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6875d8f7b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6875d8f7b6-5d9sp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6bcb874cf01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:13.614652 containerd[1582]: 2025-10-31 13:35:13.586 [INFO][4024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Namespace="calico-system" Pod="whisker-6875d8f7b6-5d9sp" WorkloadEndpoint="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" Oct 31 13:35:13.614724 containerd[1582]: 2025-10-31 13:35:13.586 [INFO][4024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bcb874cf01 ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Namespace="calico-system" Pod="whisker-6875d8f7b6-5d9sp" WorkloadEndpoint="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" Oct 31 13:35:13.614724 containerd[1582]: 2025-10-31 13:35:13.597 [INFO][4024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Namespace="calico-system" Pod="whisker-6875d8f7b6-5d9sp" WorkloadEndpoint="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" Oct 31 13:35:13.614766 containerd[1582]: 2025-10-31 13:35:13.597 [INFO][4024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Namespace="calico-system" Pod="whisker-6875d8f7b6-5d9sp" 
WorkloadEndpoint="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0", GenerateName:"whisker-6875d8f7b6-", Namespace:"calico-system", SelfLink:"", UID:"0d0fe36f-47b0-435c-890e-97fd7f68acbd", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 35, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6875d8f7b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8", Pod:"whisker-6875d8f7b6-5d9sp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6bcb874cf01", MAC:"92:61:ca:87:25:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:13.614811 containerd[1582]: 2025-10-31 13:35:13.611 [INFO][4024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" Namespace="calico-system" Pod="whisker-6875d8f7b6-5d9sp" WorkloadEndpoint="localhost-k8s-whisker--6875d8f7b6--5d9sp-eth0" Oct 31 13:35:13.656464 containerd[1582]: time="2025-10-31T13:35:13.656415261Z" level=info msg="connecting to shim 
f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8" address="unix:///run/containerd/s/250ffd09d3590dca12ce660a9f6a4a1d6e437aee22bdb8d6b29eddc52558cc94" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:13.684516 systemd[1]: Started cri-containerd-f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8.scope - libcontainer container f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8. Oct 31 13:35:13.696324 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:35:13.722809 containerd[1582]: time="2025-10-31T13:35:13.722747151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6875d8f7b6-5d9sp,Uid:0d0fe36f-47b0-435c-890e-97fd7f68acbd,Namespace:calico-system,Attempt:0,} returns sandbox id \"f281260dbc3c3f8637ddddac7be11dd49019ed4bb81a66474f97b2ead646def8\"" Oct 31 13:35:13.728483 containerd[1582]: time="2025-10-31T13:35:13.728456471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 13:35:13.761548 kubelet[2741]: I1031 13:35:13.761509 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133352ff-41f4-4716-9887-1e564e25f603" path="/var/lib/kubelet/pods/133352ff-41f4-4716-9887-1e564e25f603/volumes" Oct 31 13:35:13.939118 containerd[1582]: time="2025-10-31T13:35:13.938954908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:13.940666 containerd[1582]: time="2025-10-31T13:35:13.940595161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 13:35:13.940867 containerd[1582]: time="2025-10-31T13:35:13.940604922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 13:35:13.942785 kubelet[2741]: E1031 13:35:13.942705 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 13:35:13.944094 kubelet[2741]: E1031 13:35:13.944057 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 13:35:13.954199 kubelet[2741]: E1031 13:35:13.954110 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2bf68f51896144188649829c473f9a2e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*100
01,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6875d8f7b6-5d9sp_calico-system(0d0fe36f-47b0-435c-890e-97fd7f68acbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:13.956615 containerd[1582]: time="2025-10-31T13:35:13.956553758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 13:35:14.165651 containerd[1582]: time="2025-10-31T13:35:14.165479344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:14.166482 containerd[1582]: time="2025-10-31T13:35:14.166374036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 13:35:14.166482 containerd[1582]: time="2025-10-31T13:35:14.166405239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 13:35:14.166653 kubelet[2741]: E1031 13:35:14.166611 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 13:35:14.166945 kubelet[2741]: E1031 13:35:14.166663 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 13:35:14.166976 kubelet[2741]: E1031 13:35:14.166821 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6875d8f7b6-5d9sp_calico-system(0d0fe36f-47b0-435c-890e-97fd7f68acbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:14.168043 kubelet[2741]: E1031 13:35:14.168000 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6875d8f7b6-5d9sp" podUID="0d0fe36f-47b0-435c-890e-97fd7f68acbd" Oct 31 13:35:14.759467 systemd-networkd[1488]: cali6bcb874cf01: Gained IPv6LL Oct 31 13:35:14.886942 kubelet[2741]: E1031 13:35:14.886811 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6875d8f7b6-5d9sp" podUID="0d0fe36f-47b0-435c-890e-97fd7f68acbd" Oct 31 13:35:15.555860 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:51892.service - OpenSSH per-connection server daemon (10.0.0.1:51892). Oct 31 13:35:15.620454 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 51892 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:15.621228 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:15.624935 systemd-logind[1552]: New session 8 of user core. Oct 31 13:35:15.634407 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 13:35:15.759932 sshd[4165]: Connection closed by 10.0.0.1 port 51892 Oct 31 13:35:15.760735 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:15.765884 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:51892.service: Deactivated successfully. Oct 31 13:35:15.767739 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 13:35:15.769212 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. 
Oct 31 13:35:15.771138 systemd-logind[1552]: Removed session 8. Oct 31 13:35:17.465652 kubelet[2741]: E1031 13:35:17.465604 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:17.855894 systemd-networkd[1488]: vxlan.calico: Link UP Oct 31 13:35:17.856353 systemd-networkd[1488]: vxlan.calico: Gained carrier Oct 31 13:35:17.894004 kubelet[2741]: E1031 13:35:17.893956 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:19.559816 systemd-networkd[1488]: vxlan.calico: Gained IPv6LL Oct 31 13:35:19.761467 kubelet[2741]: E1031 13:35:19.761295 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:19.762778 containerd[1582]: time="2025-10-31T13:35:19.761871371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dnn47,Uid:a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:19.762778 containerd[1582]: time="2025-10-31T13:35:19.761990622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxtlg,Uid:94117cea-24dc-4751-95f1-2f28371123a9,Namespace:kube-system,Attempt:0,}" Oct 31 13:35:19.873379 systemd-networkd[1488]: cali580196e5a6d: Link UP Oct 31 13:35:19.874069 systemd-networkd[1488]: cali580196e5a6d: Gained carrier Oct 31 13:35:19.890785 containerd[1582]: 2025-10-31 13:35:19.804 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0 coredns-674b8bbfcf- kube-system 94117cea-24dc-4751-95f1-2f28371123a9 852 0 2025-10-31 13:34:43 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zxtlg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali580196e5a6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxtlg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxtlg-" Oct 31 13:35:19.890785 containerd[1582]: 2025-10-31 13:35:19.804 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxtlg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" Oct 31 13:35:19.890785 containerd[1582]: 2025-10-31 13:35:19.830 [INFO][4372] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" HandleID="k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Workload="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.830 [INFO][4372] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" HandleID="k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Workload="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a0e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zxtlg", "timestamp":"2025-10-31 13:35:19.830803481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.831 [INFO][4372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.831 [INFO][4372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.831 [INFO][4372] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.842 [INFO][4372] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" host="localhost" Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.846 [INFO][4372] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.851 [INFO][4372] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.853 [INFO][4372] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.855 [INFO][4372] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:19.891079 containerd[1582]: 2025-10-31 13:35:19.855 [INFO][4372] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" host="localhost" Oct 31 13:35:19.891510 containerd[1582]: 2025-10-31 13:35:19.857 [INFO][4372] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3 Oct 31 13:35:19.891510 containerd[1582]: 2025-10-31 13:35:19.861 [INFO][4372] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" host="localhost" Oct 31 13:35:19.891510 containerd[1582]: 2025-10-31 13:35:19.866 [INFO][4372] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" host="localhost" Oct 31 13:35:19.891510 containerd[1582]: 2025-10-31 13:35:19.866 [INFO][4372] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" host="localhost" Oct 31 13:35:19.891510 containerd[1582]: 2025-10-31 13:35:19.867 [INFO][4372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 13:35:19.891510 containerd[1582]: 2025-10-31 13:35:19.867 [INFO][4372] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" HandleID="k8s-pod-network.f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Workload="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" Oct 31 13:35:19.891666 containerd[1582]: 2025-10-31 13:35:19.870 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxtlg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"94117cea-24dc-4751-95f1-2f28371123a9", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zxtlg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580196e5a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:19.891751 containerd[1582]: 2025-10-31 13:35:19.870 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxtlg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" Oct 31 13:35:19.891751 containerd[1582]: 2025-10-31 13:35:19.870 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali580196e5a6d ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxtlg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" Oct 31 
13:35:19.891751 containerd[1582]: 2025-10-31 13:35:19.874 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxtlg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" Oct 31 13:35:19.891831 containerd[1582]: 2025-10-31 13:35:19.875 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxtlg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"94117cea-24dc-4751-95f1-2f28371123a9", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3", Pod:"coredns-674b8bbfcf-zxtlg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580196e5a6d", 
MAC:"32:44:e4:37:52:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:19.891831 containerd[1582]: 2025-10-31 13:35:19.887 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxtlg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxtlg-eth0" Oct 31 13:35:19.911310 containerd[1582]: time="2025-10-31T13:35:19.910765408Z" level=info msg="connecting to shim f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3" address="unix:///run/containerd/s/6a02e4167fc849ea26700b14091294746dfb5c946dbd64f4608cac0fa215f819" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:19.947489 systemd[1]: Started cri-containerd-f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3.scope - libcontainer container f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3. 
Oct 31 13:35:19.964956 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:35:19.982581 systemd-networkd[1488]: cali6f2e7947caf: Link UP Oct 31 13:35:19.982772 systemd-networkd[1488]: cali6f2e7947caf: Gained carrier Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.804 [INFO][4341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--dnn47-eth0 goldmane-666569f655- calico-system a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d 848 0 2025-10-31 13:34:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-dnn47 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6f2e7947caf [] [] }} ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Namespace="calico-system" Pod="goldmane-666569f655-dnn47" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dnn47-" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.804 [INFO][4341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Namespace="calico-system" Pod="goldmane-666569f655-dnn47" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dnn47-eth0" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.833 [INFO][4373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" HandleID="k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Workload="localhost-k8s-goldmane--666569f655--dnn47-eth0" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.833 [INFO][4373] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" HandleID="k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Workload="localhost-k8s-goldmane--666569f655--dnn47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ddc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-dnn47", "timestamp":"2025-10-31 13:35:19.833402311 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.833 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.867 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.867 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.942 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.949 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.958 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.960 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.962 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.962 [INFO][4373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.966 [INFO][4373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.970 [INFO][4373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.976 [INFO][4373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.976 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" host="localhost" Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.976 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:35:19.999740 containerd[1582]: 2025-10-31 13:35:19.976 [INFO][4373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" HandleID="k8s-pod-network.9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Workload="localhost-k8s-goldmane--666569f655--dnn47-eth0" Oct 31 13:35:20.000416 containerd[1582]: 2025-10-31 13:35:19.980 [INFO][4341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Namespace="calico-system" Pod="goldmane-666569f655-dnn47" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dnn47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dnn47-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-dnn47", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6f2e7947caf", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:20.000416 containerd[1582]: 2025-10-31 13:35:19.980 [INFO][4341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Namespace="calico-system" Pod="goldmane-666569f655-dnn47" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dnn47-eth0" Oct 31 13:35:20.000416 containerd[1582]: 2025-10-31 13:35:19.980 [INFO][4341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f2e7947caf ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Namespace="calico-system" Pod="goldmane-666569f655-dnn47" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dnn47-eth0" Oct 31 13:35:20.000416 containerd[1582]: 2025-10-31 13:35:19.982 [INFO][4341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Namespace="calico-system" Pod="goldmane-666569f655-dnn47" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dnn47-eth0" Oct 31 13:35:20.000416 containerd[1582]: 2025-10-31 13:35:19.983 [INFO][4341] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Namespace="calico-system" Pod="goldmane-666569f655-dnn47" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dnn47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dnn47-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 57, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e", Pod:"goldmane-666569f655-dnn47", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6f2e7947caf", MAC:"7e:23:32:d6:e6:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:20.000416 containerd[1582]: 2025-10-31 13:35:19.995 [INFO][4341] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" Namespace="calico-system" Pod="goldmane-666569f655-dnn47" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dnn47-eth0" Oct 31 13:35:20.009807 containerd[1582]: time="2025-10-31T13:35:20.009763402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxtlg,Uid:94117cea-24dc-4751-95f1-2f28371123a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3\"" Oct 31 13:35:20.011906 kubelet[2741]: E1031 13:35:20.011866 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:20.023864 containerd[1582]: 
time="2025-10-31T13:35:20.023325814Z" level=info msg="CreateContainer within sandbox \"f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 13:35:20.027930 containerd[1582]: time="2025-10-31T13:35:20.027887409Z" level=info msg="connecting to shim 9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e" address="unix:///run/containerd/s/749b2ad9eafcdfe02fa5c5c3192f24d0b4e3a0506d9c6785e99dd9fdfd5cd662" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:20.032462 containerd[1582]: time="2025-10-31T13:35:20.032420520Z" level=info msg="Container 13064aac7f5915d7d09b7e8502c24d66d3aab2987fc3133962f325371c13087f: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:35:20.038228 containerd[1582]: time="2025-10-31T13:35:20.038185019Z" level=info msg="CreateContainer within sandbox \"f43c770a404bf49754f91b4576870bc006d9aabde4997c2b527613bd1ce7cbc3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13064aac7f5915d7d09b7e8502c24d66d3aab2987fc3133962f325371c13087f\"" Oct 31 13:35:20.038934 containerd[1582]: time="2025-10-31T13:35:20.038893600Z" level=info msg="StartContainer for \"13064aac7f5915d7d09b7e8502c24d66d3aab2987fc3133962f325371c13087f\"" Oct 31 13:35:20.040348 containerd[1582]: time="2025-10-31T13:35:20.040317283Z" level=info msg="connecting to shim 13064aac7f5915d7d09b7e8502c24d66d3aab2987fc3133962f325371c13087f" address="unix:///run/containerd/s/6a02e4167fc849ea26700b14091294746dfb5c946dbd64f4608cac0fa215f819" protocol=ttrpc version=3 Oct 31 13:35:20.056482 systemd[1]: Started cri-containerd-9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e.scope - libcontainer container 9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e. 
Oct 31 13:35:20.059808 systemd[1]: Started cri-containerd-13064aac7f5915d7d09b7e8502c24d66d3aab2987fc3133962f325371c13087f.scope - libcontainer container 13064aac7f5915d7d09b7e8502c24d66d3aab2987fc3133962f325371c13087f. Oct 31 13:35:20.074537 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:35:20.095954 containerd[1582]: time="2025-10-31T13:35:20.095900329Z" level=info msg="StartContainer for \"13064aac7f5915d7d09b7e8502c24d66d3aab2987fc3133962f325371c13087f\" returns successfully" Oct 31 13:35:20.162300 containerd[1582]: time="2025-10-31T13:35:20.160897828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dnn47,Uid:a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9430f6d62db9c000e3210ed4be192d1ea8adeeb3d34f153444c88be86f8c965e\"" Oct 31 13:35:20.164790 containerd[1582]: time="2025-10-31T13:35:20.164758842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 13:35:20.384047 containerd[1582]: time="2025-10-31T13:35:20.383989036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:20.384996 containerd[1582]: time="2025-10-31T13:35:20.384937118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 13:35:20.385057 containerd[1582]: time="2025-10-31T13:35:20.384948719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 13:35:20.385248 kubelet[2741]: E1031 13:35:20.385201 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 13:35:20.385331 kubelet[2741]: E1031 13:35:20.385252 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 13:35:20.385721 kubelet[2741]: E1031 13:35:20.385409 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},VolumeMount{Name:kube-api-access-cpp7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dnn47_calico-system(a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:20.386637 kubelet[2741]: E1031 13:35:20.386582 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dnn47" podUID="a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d" Oct 31 13:35:20.794461 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:60336.service - OpenSSH per-connection server daemon (10.0.0.1:60336). Oct 31 13:35:20.855784 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 60336 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:20.857367 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:20.861499 systemd-logind[1552]: New session 9 of user core. Oct 31 13:35:20.871466 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 13:35:20.905688 kubelet[2741]: E1031 13:35:20.905564 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dnn47" podUID="a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d" Oct 31 13:35:20.909469 kubelet[2741]: E1031 13:35:20.909435 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:20.945518 kubelet[2741]: I1031 13:35:20.945445 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zxtlg" podStartSLOduration=37.945413896 podStartE2EDuration="37.945413896s" podCreationTimestamp="2025-10-31 13:34:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:35:20.944336003 +0000 UTC m=+43.276885170" watchObservedRunningTime="2025-10-31 13:35:20.945413896 +0000 UTC m=+43.277963063" Oct 31 13:35:21.053590 sshd[4533]: Connection closed by 10.0.0.1 port 60336 Oct 31 13:35:21.052896 sshd-session[4530]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:21.057138 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:60336.service: Deactivated successfully. Oct 31 13:35:21.059322 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 13:35:21.060353 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Oct 31 13:35:21.061635 systemd-logind[1552]: Removed session 9. Oct 31 13:35:21.735463 systemd-networkd[1488]: cali580196e5a6d: Gained IPv6LL Oct 31 13:35:21.762699 containerd[1582]: time="2025-10-31T13:35:21.762659761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65687fc5c7-n2vln,Uid:eeb05b47-85b4-418f-a8c7-06c3a0435abf,Namespace:calico-apiserver,Attempt:0,}" Oct 31 13:35:21.763048 containerd[1582]: time="2025-10-31T13:35:21.762709285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-skbn9,Uid:8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:21.863582 systemd-networkd[1488]: cali6f2e7947caf: Gained IPv6LL Oct 31 13:35:21.914039 kubelet[2741]: E1031 13:35:21.913818 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:21.917662 kubelet[2741]: E1031 13:35:21.917474 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dnn47" podUID="a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d" Oct 31 13:35:21.930385 systemd-networkd[1488]: calib298de328fa: Link UP Oct 31 13:35:21.930624 systemd-networkd[1488]: calib298de328fa: Gained carrier Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.845 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--skbn9-eth0 csi-node-driver- calico-system 8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd 740 0 2025-10-31 13:35:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-skbn9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib298de328fa [] [] }} ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Namespace="calico-system" Pod="csi-node-driver-skbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--skbn9-" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.845 [INFO][4571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Namespace="calico-system" Pod="csi-node-driver-skbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--skbn9-eth0" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.873 [INFO][4591] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" 
HandleID="k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Workload="localhost-k8s-csi--node--driver--skbn9-eth0" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.873 [INFO][4591] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" HandleID="k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Workload="localhost-k8s-csi--node--driver--skbn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-skbn9", "timestamp":"2025-10-31 13:35:21.873178531 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.873 [INFO][4591] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.873 [INFO][4591] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.873 [INFO][4591] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.883 [INFO][4591] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.888 [INFO][4591] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.897 [INFO][4591] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.899 [INFO][4591] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.902 [INFO][4591] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.902 [INFO][4591] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.904 [INFO][4591] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.910 [INFO][4591] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.918 [INFO][4591] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.918 [INFO][4591] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" host="localhost" Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.918 [INFO][4591] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 13:35:21.951593 containerd[1582]: 2025-10-31 13:35:21.918 [INFO][4591] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" HandleID="k8s-pod-network.7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Workload="localhost-k8s-csi--node--driver--skbn9-eth0" Oct 31 13:35:21.952154 containerd[1582]: 2025-10-31 13:35:21.924 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Namespace="calico-system" Pod="csi-node-driver-skbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--skbn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--skbn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 35, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-skbn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib298de328fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:21.952154 containerd[1582]: 2025-10-31 13:35:21.924 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Namespace="calico-system" Pod="csi-node-driver-skbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--skbn9-eth0" Oct 31 13:35:21.952154 containerd[1582]: 2025-10-31 13:35:21.924 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib298de328fa ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Namespace="calico-system" Pod="csi-node-driver-skbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--skbn9-eth0" Oct 31 13:35:21.952154 containerd[1582]: 2025-10-31 13:35:21.930 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Namespace="calico-system" Pod="csi-node-driver-skbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--skbn9-eth0" Oct 31 13:35:21.952154 containerd[1582]: 2025-10-31 13:35:21.931 [INFO][4571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" 
Namespace="calico-system" Pod="csi-node-driver-skbn9" WorkloadEndpoint="localhost-k8s-csi--node--driver--skbn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--skbn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 35, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d", Pod:"csi-node-driver-skbn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib298de328fa", MAC:"1e:dd:54:ec:46:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:21.952154 containerd[1582]: 2025-10-31 13:35:21.947 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" Namespace="calico-system" Pod="csi-node-driver-skbn9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--skbn9-eth0" Oct 31 13:35:21.975491 containerd[1582]: time="2025-10-31T13:35:21.975426363Z" level=info msg="connecting to shim 7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d" address="unix:///run/containerd/s/915d44cdfa08217ead5f208695089496f711cf8a1d5786814706ff978836ffef" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:22.002611 systemd[1]: Started cri-containerd-7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d.scope - libcontainer container 7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d. Oct 31 13:35:22.016733 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:35:22.029070 systemd-networkd[1488]: cali6a1acac432c: Link UP Oct 31 13:35:22.030512 systemd-networkd[1488]: cali6a1acac432c: Gained carrier Oct 31 13:35:22.053644 containerd[1582]: time="2025-10-31T13:35:22.053595300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-skbn9,Uid:8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bff7c619a1aca72e22023c8aa622debcf7980b624284e2675f535a2a979d02d\"" Oct 31 13:35:22.056171 containerd[1582]: time="2025-10-31T13:35:22.056129429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.836 [INFO][4554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0 calico-apiserver-65687fc5c7- calico-apiserver eeb05b47-85b4-418f-a8c7-06c3a0435abf 847 0 2025-10-31 13:34:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65687fc5c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
localhost calico-apiserver-65687fc5c7-n2vln eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a1acac432c [] [] }} ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-n2vln" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.836 [INFO][4554] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-n2vln" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.877 [INFO][4585] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" HandleID="k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Workload="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.877 [INFO][4585] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" HandleID="k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Workload="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3ed0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65687fc5c7-n2vln", "timestamp":"2025-10-31 13:35:21.877347923 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:35:22.059436 containerd[1582]: 
2025-10-31 13:35:21.877 [INFO][4585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.918 [INFO][4585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.918 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.985 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.990 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:21.996 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.001 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.005 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.005 [INFO][4585] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.007 [INFO][4585] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.012 [INFO][4585] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" 
host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.019 [INFO][4585] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.019 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" host="localhost" Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.019 [INFO][4585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 13:35:22.059436 containerd[1582]: 2025-10-31 13:35:22.019 [INFO][4585] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" HandleID="k8s-pod-network.2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Workload="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" Oct 31 13:35:22.059916 containerd[1582]: 2025-10-31 13:35:22.022 [INFO][4554] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-n2vln" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0", GenerateName:"calico-apiserver-65687fc5c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeb05b47-85b4-418f-a8c7-06c3a0435abf", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65687fc5c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65687fc5c7-n2vln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a1acac432c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:22.059916 containerd[1582]: 2025-10-31 13:35:22.022 [INFO][4554] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-n2vln" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" Oct 31 13:35:22.059916 containerd[1582]: 2025-10-31 13:35:22.022 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a1acac432c ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-n2vln" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" Oct 31 13:35:22.059916 containerd[1582]: 2025-10-31 13:35:22.029 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-n2vln" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" Oct 31 13:35:22.059916 containerd[1582]: 2025-10-31 13:35:22.030 [INFO][4554] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-n2vln" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0", GenerateName:"calico-apiserver-65687fc5c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeb05b47-85b4-418f-a8c7-06c3a0435abf", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65687fc5c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef", Pod:"calico-apiserver-65687fc5c7-n2vln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a1acac432c", MAC:"6e:dc:aa:35:ef:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:22.059916 containerd[1582]: 2025-10-31 13:35:22.053 [INFO][4554] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-n2vln" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--n2vln-eth0" Oct 31 13:35:22.103288 containerd[1582]: time="2025-10-31T13:35:22.102554539Z" level=info msg="connecting to shim 2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef" address="unix:///run/containerd/s/72ce9017683c785a7ee491d3d3b4505eea4d19b853b425251bc1baec23182dd5" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:22.127484 systemd[1]: Started cri-containerd-2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef.scope - libcontainer container 2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef. Oct 31 13:35:22.140787 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:35:22.165447 containerd[1582]: time="2025-10-31T13:35:22.165410526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65687fc5c7-n2vln,Uid:eeb05b47-85b4-418f-a8c7-06c3a0435abf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2d6d7f315c285efca690422e1f9792781749a8e1f1d3e449bcdae8c0e275ccef\"" Oct 31 13:35:22.285154 containerd[1582]: time="2025-10-31T13:35:22.285010634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:22.286268 containerd[1582]: time="2025-10-31T13:35:22.286218574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 13:35:22.286334 containerd[1582]: 
time="2025-10-31T13:35:22.286287659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 13:35:22.286529 kubelet[2741]: E1031 13:35:22.286481 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 13:35:22.286599 kubelet[2741]: E1031 13:35:22.286541 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 13:35:22.286838 kubelet[2741]: E1031 13:35:22.286781 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s54xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-skbn9_calico-system(8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:22.286984 containerd[1582]: time="2025-10-31T13:35:22.286960315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 13:35:22.501282 containerd[1582]: time="2025-10-31T13:35:22.501222994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:22.502267 containerd[1582]: time="2025-10-31T13:35:22.502217316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 13:35:22.502351 containerd[1582]: time="2025-10-31T13:35:22.502245718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 13:35:22.502537 kubelet[2741]: E1031 13:35:22.502499 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:35:22.502592 kubelet[2741]: E1031 13:35:22.502552 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:35:22.502850 kubelet[2741]: E1031 13:35:22.502801 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zvmz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65687fc5c7-n2vln_calico-apiserver(eeb05b47-85b4-418f-a8c7-06c3a0435abf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:22.504023 kubelet[2741]: E1031 13:35:22.503983 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" podUID="eeb05b47-85b4-418f-a8c7-06c3a0435abf" Oct 31 13:35:22.504170 containerd[1582]: time="2025-10-31T13:35:22.504126473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 13:35:22.720631 containerd[1582]: 
time="2025-10-31T13:35:22.720585294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:22.721600 containerd[1582]: time="2025-10-31T13:35:22.721561334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 13:35:22.721674 containerd[1582]: time="2025-10-31T13:35:22.721639781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 13:35:22.721911 kubelet[2741]: E1031 13:35:22.721859 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 13:35:22.721973 kubelet[2741]: E1031 13:35:22.721927 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 13:35:22.722097 kubelet[2741]: E1031 13:35:22.722060 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s54xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-skbn9_calico-system(8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:22.723301 kubelet[2741]: E1031 13:35:22.723236 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-skbn9" podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:22.760031 kubelet[2741]: E1031 13:35:22.759990 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:22.760318 containerd[1582]: time="2025-10-31T13:35:22.760285249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65687fc5c7-d8f78,Uid:dfff0312-ad1e-473d-81aa-fca6e368f968,Namespace:calico-apiserver,Attempt:0,}" Oct 31 13:35:22.760747 containerd[1582]: time="2025-10-31T13:35:22.760717205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wr62z,Uid:dda6e16a-d67b-4626-8e4a-5373e472e2fd,Namespace:kube-system,Attempt:0,}" Oct 31 13:35:22.880396 systemd-networkd[1488]: calief22c41ad58: Link UP Oct 31 13:35:22.881449 systemd-networkd[1488]: calief22c41ad58: Gained carrier Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.806 [INFO][4711] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0 calico-apiserver-65687fc5c7- calico-apiserver dfff0312-ad1e-473d-81aa-fca6e368f968 849 0 2025-10-31 13:34:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65687fc5c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65687fc5c7-d8f78 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calief22c41ad58 [] [] }} ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-d8f78" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.806 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-d8f78" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.837 [INFO][4744] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" HandleID="k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Workload="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.837 [INFO][4744] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" HandleID="k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" 
Workload="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65687fc5c7-d8f78", "timestamp":"2025-10-31 13:35:22.837533143 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.837 [INFO][4744] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.837 [INFO][4744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.837 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.847 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.853 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.858 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.860 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.862 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.862 [INFO][4744] ipam/ipam.go 1219: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.864 [INFO][4744] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0 Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.868 [INFO][4744] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.874 [INFO][4744] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.874 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" host="localhost" Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.874 [INFO][4744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:35:22.898216 containerd[1582]: 2025-10-31 13:35:22.874 [INFO][4744] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" HandleID="k8s-pod-network.5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Workload="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" Oct 31 13:35:22.899049 containerd[1582]: 2025-10-31 13:35:22.877 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-d8f78" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0", GenerateName:"calico-apiserver-65687fc5c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfff0312-ad1e-473d-81aa-fca6e368f968", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65687fc5c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65687fc5c7-d8f78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief22c41ad58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:22.899049 containerd[1582]: 2025-10-31 13:35:22.877 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-d8f78" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" Oct 31 13:35:22.899049 containerd[1582]: 2025-10-31 13:35:22.877 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief22c41ad58 ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-d8f78" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" Oct 31 13:35:22.899049 containerd[1582]: 2025-10-31 13:35:22.880 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-d8f78" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" Oct 31 13:35:22.899049 containerd[1582]: 2025-10-31 13:35:22.881 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-d8f78" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0", 
GenerateName:"calico-apiserver-65687fc5c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfff0312-ad1e-473d-81aa-fca6e368f968", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65687fc5c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0", Pod:"calico-apiserver-65687fc5c7-d8f78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief22c41ad58", MAC:"4a:c0:77:8e:17:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:22.899049 containerd[1582]: 2025-10-31 13:35:22.893 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" Namespace="calico-apiserver" Pod="calico-apiserver-65687fc5c7-d8f78" WorkloadEndpoint="localhost-k8s-calico--apiserver--65687fc5c7--d8f78-eth0" Oct 31 13:35:22.921120 kubelet[2741]: E1031 13:35:22.921002 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-skbn9" podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:22.921868 kubelet[2741]: E1031 13:35:22.921315 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:22.923352 kubelet[2741]: E1031 13:35:22.922856 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" podUID="eeb05b47-85b4-418f-a8c7-06c3a0435abf" Oct 31 13:35:22.934922 containerd[1582]: time="2025-10-31T13:35:22.934873975Z" level=info msg="connecting to shim 5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0" address="unix:///run/containerd/s/83a4439c1decb69e2de6430dfde7c6fe06bd3ac8137be0f7b8758799456aff95" namespace=k8s.io protocol=ttrpc 
version=3 Oct 31 13:35:22.973467 systemd[1]: Started cri-containerd-5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0.scope - libcontainer container 5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0. Oct 31 13:35:22.994434 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:35:23.000045 systemd-networkd[1488]: cali32ba21499f2: Link UP Oct 31 13:35:23.000830 systemd-networkd[1488]: cali32ba21499f2: Gained carrier Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.805 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wr62z-eth0 coredns-674b8bbfcf- kube-system dda6e16a-d67b-4626-8e4a-5373e472e2fd 845 0 2025-10-31 13:34:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wr62z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali32ba21499f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Namespace="kube-system" Pod="coredns-674b8bbfcf-wr62z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wr62z-" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.806 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Namespace="kube-system" Pod="coredns-674b8bbfcf-wr62z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.837 [INFO][4745] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" 
HandleID="k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Workload="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.837 [INFO][4745] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" HandleID="k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Workload="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wr62z", "timestamp":"2025-10-31 13:35:22.837533383 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.837 [INFO][4745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.874 [INFO][4745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.874 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.948 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.957 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.966 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.969 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.972 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.972 [INFO][4745] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.975 [INFO][4745] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.982 [INFO][4745] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.991 [INFO][4745] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.991 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" host="localhost" Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.992 [INFO][4745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 13:35:23.026185 containerd[1582]: 2025-10-31 13:35:22.992 [INFO][4745] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" HandleID="k8s-pod-network.3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Workload="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" Oct 31 13:35:23.026728 containerd[1582]: 2025-10-31 13:35:22.995 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Namespace="kube-system" Pod="coredns-674b8bbfcf-wr62z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wr62z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dda6e16a-d67b-4626-8e4a-5373e472e2fd", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wr62z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32ba21499f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:23.026728 containerd[1582]: 2025-10-31 13:35:22.996 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Namespace="kube-system" Pod="coredns-674b8bbfcf-wr62z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" Oct 31 13:35:23.026728 containerd[1582]: 2025-10-31 13:35:22.996 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32ba21499f2 ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Namespace="kube-system" Pod="coredns-674b8bbfcf-wr62z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" Oct 31 13:35:23.026728 containerd[1582]: 2025-10-31 13:35:23.000 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Namespace="kube-system" Pod="coredns-674b8bbfcf-wr62z" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" Oct 31 13:35:23.026728 containerd[1582]: 2025-10-31 13:35:23.001 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Namespace="kube-system" Pod="coredns-674b8bbfcf-wr62z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wr62z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dda6e16a-d67b-4626-8e4a-5373e472e2fd", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf", Pod:"coredns-674b8bbfcf-wr62z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32ba21499f2", MAC:"7a:7a:a3:1f:08:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:23.026728 containerd[1582]: 2025-10-31 13:35:23.015 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" Namespace="kube-system" Pod="coredns-674b8bbfcf-wr62z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wr62z-eth0" Oct 31 13:35:23.038370 containerd[1582]: time="2025-10-31T13:35:23.037952934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65687fc5c7-d8f78,Uid:dfff0312-ad1e-473d-81aa-fca6e368f968,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5797bb7172ca7e8610b17722d4b3020c4caac87fdf63d13d2c584eba2bfd9df0\"" Oct 31 13:35:23.045492 containerd[1582]: time="2025-10-31T13:35:23.041511902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 13:35:23.053537 containerd[1582]: time="2025-10-31T13:35:23.053487388Z" level=info msg="connecting to shim 3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf" address="unix:///run/containerd/s/8121ab405395ad8f4407120b771bbcc0355a193a5e9f0849fa4f929d02482a96" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:23.079481 systemd[1]: Started cri-containerd-3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf.scope - libcontainer container 3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf. 
Oct 31 13:35:23.091741 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:35:23.120133 containerd[1582]: time="2025-10-31T13:35:23.120091165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wr62z,Uid:dda6e16a-d67b-4626-8e4a-5373e472e2fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf\"" Oct 31 13:35:23.120917 kubelet[2741]: E1031 13:35:23.120892 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:23.130034 containerd[1582]: time="2025-10-31T13:35:23.129915358Z" level=info msg="CreateContainer within sandbox \"3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 13:35:23.136339 containerd[1582]: time="2025-10-31T13:35:23.136295313Z" level=info msg="Container c193a0180895c5e3b3c64faa1cbca2bd341ea900cd0d97a8c30ce0427bb9f5f2: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:35:23.141020 containerd[1582]: time="2025-10-31T13:35:23.140955329Z" level=info msg="CreateContainer within sandbox \"3c5c864f197874a33c78d9b7516c86bc8ae23d086f8951a11a54a742a787dfbf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c193a0180895c5e3b3c64faa1cbca2bd341ea900cd0d97a8c30ce0427bb9f5f2\"" Oct 31 13:35:23.141536 containerd[1582]: time="2025-10-31T13:35:23.141512094Z" level=info msg="StartContainer for \"c193a0180895c5e3b3c64faa1cbca2bd341ea900cd0d97a8c30ce0427bb9f5f2\"" Oct 31 13:35:23.142433 containerd[1582]: time="2025-10-31T13:35:23.142388404Z" level=info msg="connecting to shim c193a0180895c5e3b3c64faa1cbca2bd341ea900cd0d97a8c30ce0427bb9f5f2" address="unix:///run/containerd/s/8121ab405395ad8f4407120b771bbcc0355a193a5e9f0849fa4f929d02482a96" protocol=ttrpc version=3 
Oct 31 13:35:23.167497 systemd[1]: Started cri-containerd-c193a0180895c5e3b3c64faa1cbca2bd341ea900cd0d97a8c30ce0427bb9f5f2.scope - libcontainer container c193a0180895c5e3b3c64faa1cbca2bd341ea900cd0d97a8c30ce0427bb9f5f2. Oct 31 13:35:23.195346 containerd[1582]: time="2025-10-31T13:35:23.195300835Z" level=info msg="StartContainer for \"c193a0180895c5e3b3c64faa1cbca2bd341ea900cd0d97a8c30ce0427bb9f5f2\" returns successfully" Oct 31 13:35:23.254063 containerd[1582]: time="2025-10-31T13:35:23.253614703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:23.256436 containerd[1582]: time="2025-10-31T13:35:23.256390127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 13:35:23.256527 containerd[1582]: time="2025-10-31T13:35:23.256483774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 13:35:23.256884 kubelet[2741]: E1031 13:35:23.256810 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:35:23.256930 kubelet[2741]: E1031 13:35:23.256887 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 
13:35:23.257104 kubelet[2741]: E1031 13:35:23.257059 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl8dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65687fc5c7-d8f78_calico-apiserver(dfff0312-ad1e-473d-81aa-fca6e368f968): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:23.258625 kubelet[2741]: E1031 13:35:23.258567 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" podUID="dfff0312-ad1e-473d-81aa-fca6e368f968" Oct 31 13:35:23.271582 systemd-networkd[1488]: cali6a1acac432c: Gained IPv6LL Oct 31 13:35:23.759700 containerd[1582]: time="2025-10-31T13:35:23.759651390Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7bbbd67f67-xbmvj,Uid:7e03f713-e3f7-401b-a028-09875138e499,Namespace:calico-system,Attempt:0,}" Oct 31 13:35:23.890152 systemd-networkd[1488]: cali32f289c8b96: Link UP Oct 31 13:35:23.890513 systemd-networkd[1488]: cali32f289c8b96: Gained carrier Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.801 [INFO][4913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0 calico-kube-controllers-7bbbd67f67- calico-system 7e03f713-e3f7-401b-a028-09875138e499 846 0 2025-10-31 13:35:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bbbd67f67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bbbd67f67-xbmvj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali32f289c8b96 [] [] }} ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Namespace="calico-system" Pod="calico-kube-controllers-7bbbd67f67-xbmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.801 [INFO][4913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Namespace="calico-system" Pod="calico-kube-controllers-7bbbd67f67-xbmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.838 [INFO][4927] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" 
HandleID="k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Workload="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.839 [INFO][4927] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" HandleID="k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Workload="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bbbd67f67-xbmvj", "timestamp":"2025-10-31 13:35:23.838824301 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.839 [INFO][4927] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.839 [INFO][4927] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.839 [INFO][4927] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.849 [INFO][4927] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.855 [INFO][4927] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.862 [INFO][4927] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.864 [INFO][4927] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.867 [INFO][4927] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.868 [INFO][4927] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.870 [INFO][4927] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8 Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.874 [INFO][4927] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.881 [INFO][4927] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.881 [INFO][4927] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" host="localhost" Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.881 [INFO][4927] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 13:35:23.900251 containerd[1582]: 2025-10-31 13:35:23.881 [INFO][4927] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" HandleID="k8s-pod-network.f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Workload="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" Oct 31 13:35:23.901061 containerd[1582]: 2025-10-31 13:35:23.884 [INFO][4913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Namespace="calico-system" Pod="calico-kube-controllers-7bbbd67f67-xbmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0", GenerateName:"calico-kube-controllers-7bbbd67f67-", Namespace:"calico-system", SelfLink:"", UID:"7e03f713-e3f7-401b-a028-09875138e499", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 35, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbbd67f67", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bbbd67f67-xbmvj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32f289c8b96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:23.901061 containerd[1582]: 2025-10-31 13:35:23.884 [INFO][4913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Namespace="calico-system" Pod="calico-kube-controllers-7bbbd67f67-xbmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" Oct 31 13:35:23.901061 containerd[1582]: 2025-10-31 13:35:23.884 [INFO][4913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32f289c8b96 ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Namespace="calico-system" Pod="calico-kube-controllers-7bbbd67f67-xbmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" Oct 31 13:35:23.901061 containerd[1582]: 2025-10-31 13:35:23.886 [INFO][4913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Namespace="calico-system" Pod="calico-kube-controllers-7bbbd67f67-xbmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" Oct 31 13:35:23.901061 containerd[1582]: 
2025-10-31 13:35:23.887 [INFO][4913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Namespace="calico-system" Pod="calico-kube-controllers-7bbbd67f67-xbmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0", GenerateName:"calico-kube-controllers-7bbbd67f67-", Namespace:"calico-system", SelfLink:"", UID:"7e03f713-e3f7-401b-a028-09875138e499", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 35, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbbd67f67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8", Pod:"calico-kube-controllers-7bbbd67f67-xbmvj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32f289c8b96", MAC:"ee:97:27:3a:f0:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:35:23.901061 containerd[1582]: 
2025-10-31 13:35:23.896 [INFO][4913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" Namespace="calico-system" Pod="calico-kube-controllers-7bbbd67f67-xbmvj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bbbd67f67--xbmvj-eth0" Oct 31 13:35:23.911568 systemd-networkd[1488]: calib298de328fa: Gained IPv6LL Oct 31 13:35:23.926233 kubelet[2741]: E1031 13:35:23.926195 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:23.930410 kubelet[2741]: E1031 13:35:23.930188 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" podUID="dfff0312-ad1e-473d-81aa-fca6e368f968" Oct 31 13:35:23.930827 kubelet[2741]: E1031 13:35:23.930786 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-skbn9" podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:23.932478 kubelet[2741]: E1031 13:35:23.932431 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" podUID="eeb05b47-85b4-418f-a8c7-06c3a0435abf" Oct 31 13:35:23.937756 containerd[1582]: time="2025-10-31T13:35:23.937643358Z" level=info msg="connecting to shim f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8" address="unix:///run/containerd/s/9601715db5753697b62660569b269d953f14e6a3f233b17ae385a426ebc22a51" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:35:23.975975 kubelet[2741]: I1031 13:35:23.975657 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wr62z" podStartSLOduration=40.975639945 podStartE2EDuration="40.975639945s" podCreationTimestamp="2025-10-31 13:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:35:23.942648162 +0000 UTC m=+46.275197329" watchObservedRunningTime="2025-10-31 13:35:23.975639945 +0000 UTC m=+46.308189112" Oct 31 13:35:23.982396 systemd[1]: Started 
cri-containerd-f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8.scope - libcontainer container f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8. Oct 31 13:35:24.014871 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:35:24.047321 containerd[1582]: time="2025-10-31T13:35:24.047240447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbbd67f67-xbmvj,Uid:7e03f713-e3f7-401b-a028-09875138e499,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1a1638665eac8902175ca8cc2bef6b6e6edc28288d130ddca87e87f8d8911f8\"" Oct 31 13:35:24.048748 containerd[1582]: time="2025-10-31T13:35:24.048721724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 13:35:24.251158 containerd[1582]: time="2025-10-31T13:35:24.251116042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:24.252085 containerd[1582]: time="2025-10-31T13:35:24.252048116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 13:35:24.252160 containerd[1582]: time="2025-10-31T13:35:24.252123681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 13:35:24.252329 kubelet[2741]: E1031 13:35:24.252293 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 13:35:24.252393 kubelet[2741]: E1031 13:35:24.252343 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 13:35:24.252514 kubelet[2741]: E1031 13:35:24.252470 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t822,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bbbd67f67-xbmvj_calico-system(7e03f713-e3f7-401b-a028-09875138e499): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:24.254197 kubelet[2741]: E1031 13:35:24.253724 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" podUID="7e03f713-e3f7-401b-a028-09875138e499" Oct 31 13:35:24.551514 systemd-networkd[1488]: calief22c41ad58: Gained IPv6LL Oct 31 13:35:24.551897 systemd-networkd[1488]: cali32ba21499f2: Gained IPv6LL Oct 31 13:35:24.932665 kubelet[2741]: E1031 13:35:24.932561 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:24.934558 kubelet[2741]: E1031 13:35:24.934517 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" podUID="7e03f713-e3f7-401b-a028-09875138e499" Oct 31 13:35:24.934679 kubelet[2741]: E1031 13:35:24.934630 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" podUID="dfff0312-ad1e-473d-81aa-fca6e368f968" Oct 31 13:35:25.640449 systemd-networkd[1488]: cali32f289c8b96: Gained IPv6LL Oct 31 13:35:25.935437 kubelet[2741]: E1031 13:35:25.935058 2741 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:25.935437 kubelet[2741]: E1031 13:35:25.935035 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" podUID="7e03f713-e3f7-401b-a028-09875138e499" Oct 31 13:35:26.076198 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:60342.service - OpenSSH per-connection server daemon (10.0.0.1:60342). Oct 31 13:35:26.145494 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 60342 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:26.146800 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:26.151219 systemd-logind[1552]: New session 10 of user core. Oct 31 13:35:26.158410 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 13:35:26.293523 sshd[4998]: Connection closed by 10.0.0.1 port 60342 Oct 31 13:35:26.294227 sshd-session[4995]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:26.303148 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:60342.service: Deactivated successfully. Oct 31 13:35:26.304848 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 13:35:26.306124 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Oct 31 13:35:26.309246 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:60344.service - OpenSSH per-connection server daemon (10.0.0.1:60344). 
Oct 31 13:35:26.310720 systemd-logind[1552]: Removed session 10. Oct 31 13:35:26.377717 sshd[5015]: Accepted publickey for core from 10.0.0.1 port 60344 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:26.378905 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:26.382860 systemd-logind[1552]: New session 11 of user core. Oct 31 13:35:26.389495 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 13:35:26.506427 sshd[5019]: Connection closed by 10.0.0.1 port 60344 Oct 31 13:35:26.509160 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:26.521741 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:60344.service: Deactivated successfully. Oct 31 13:35:26.524982 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 13:35:26.527380 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. Oct 31 13:35:26.531962 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:60354.service - OpenSSH per-connection server daemon (10.0.0.1:60354). Oct 31 13:35:26.532885 systemd-logind[1552]: Removed session 11. Oct 31 13:35:26.585991 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 60354 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:26.587477 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:26.590009 kubelet[2741]: I1031 13:35:26.589979 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 13:35:26.590617 kubelet[2741]: E1031 13:35:26.590588 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:26.593185 systemd-logind[1552]: New session 12 of user core. Oct 31 13:35:26.600437 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 31 13:35:26.732944 sshd[5033]: Connection closed by 10.0.0.1 port 60354 Oct 31 13:35:26.733578 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:26.735607 containerd[1582]: time="2025-10-31T13:35:26.735566563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65\" id:\"41d8ac117fd78cbf6437436a2bfb43eadc608b4a0c02d944b182673d591c0721\" pid:5054 exit_status:1 exited_at:{seconds:1761917726 nanos:735161332}" Oct 31 13:35:26.738715 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:60354.service: Deactivated successfully. Oct 31 13:35:26.740829 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 13:35:26.742485 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. Oct 31 13:35:26.744392 systemd-logind[1552]: Removed session 12. Oct 31 13:35:26.848237 containerd[1582]: time="2025-10-31T13:35:26.848114796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443c4f075c0e14172932645d8cb7bba58b93453c69d74b1a641f331dfbf69b65\" id:\"604e18d7de838bcbbfb8d607131959c2b97ee439e67dcba0f94587c7b6a94385\" pid:5082 exit_status:1 exited_at:{seconds:1761917726 nanos:847809053}" Oct 31 13:35:28.760035 containerd[1582]: time="2025-10-31T13:35:28.759871538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 13:35:28.968956 containerd[1582]: time="2025-10-31T13:35:28.968863940Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:28.970560 containerd[1582]: time="2025-10-31T13:35:28.970523942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 13:35:28.970651 containerd[1582]: 
time="2025-10-31T13:35:28.970611268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 13:35:28.970833 kubelet[2741]: E1031 13:35:28.970798 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 13:35:28.971334 kubelet[2741]: E1031 13:35:28.971120 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 13:35:28.971334 kubelet[2741]: E1031 13:35:28.971287 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2bf68f51896144188649829c473f9a2e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6875d8f7b6-5d9sp_calico-system(0d0fe36f-47b0-435c-890e-97fd7f68acbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:28.973539 containerd[1582]: time="2025-10-31T13:35:28.973514881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 
13:35:29.177127 containerd[1582]: time="2025-10-31T13:35:29.176993505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:29.178108 containerd[1582]: time="2025-10-31T13:35:29.178067462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 13:35:29.178182 containerd[1582]: time="2025-10-31T13:35:29.178154269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 13:35:29.178575 kubelet[2741]: E1031 13:35:29.178322 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 13:35:29.178575 kubelet[2741]: E1031 13:35:29.178378 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 13:35:29.178575 kubelet[2741]: E1031 13:35:29.178525 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6875d8f7b6-5d9sp_calico-system(0d0fe36f-47b0-435c-890e-97fd7f68acbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:29.179717 kubelet[2741]: E1031 13:35:29.179669 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6875d8f7b6-5d9sp" podUID="0d0fe36f-47b0-435c-890e-97fd7f68acbd" Oct 31 13:35:31.744884 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:41912.service - OpenSSH per-connection server daemon (10.0.0.1:41912). Oct 31 13:35:31.801726 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 41912 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:31.802948 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:31.807334 systemd-logind[1552]: New session 13 of user core. Oct 31 13:35:31.814439 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 13:35:31.888199 sshd[5108]: Connection closed by 10.0.0.1 port 41912 Oct 31 13:35:31.889544 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:31.900772 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:41912.service: Deactivated successfully. 
Oct 31 13:35:31.902426 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 13:35:31.903113 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. Oct 31 13:35:31.908563 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:41928.service - OpenSSH per-connection server daemon (10.0.0.1:41928). Oct 31 13:35:31.909253 systemd-logind[1552]: Removed session 13. Oct 31 13:35:31.967423 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 41928 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:31.968471 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:31.972315 systemd-logind[1552]: New session 14 of user core. Oct 31 13:35:31.984421 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 13:35:32.127791 sshd[5125]: Connection closed by 10.0.0.1 port 41928 Oct 31 13:35:32.128652 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:32.137330 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:41928.service: Deactivated successfully. Oct 31 13:35:32.138946 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 13:35:32.139726 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. Oct 31 13:35:32.142054 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:41942.service - OpenSSH per-connection server daemon (10.0.0.1:41942). Oct 31 13:35:32.142947 systemd-logind[1552]: Removed session 14. Oct 31 13:35:32.221440 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 41942 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:32.222742 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:32.226671 systemd-logind[1552]: New session 15 of user core. Oct 31 13:35:32.237430 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 31 13:35:32.738202 sshd[5140]: Connection closed by 10.0.0.1 port 41942 Oct 31 13:35:32.738671 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:32.759688 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:41942.service: Deactivated successfully. Oct 31 13:35:32.761452 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 13:35:32.765782 containerd[1582]: time="2025-10-31T13:35:32.765748721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 13:35:32.766733 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit. Oct 31 13:35:32.776866 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:41958.service - OpenSSH per-connection server daemon (10.0.0.1:41958). Oct 31 13:35:32.777589 systemd-logind[1552]: Removed session 15. Oct 31 13:35:32.831122 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 41958 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:32.832152 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:32.835831 systemd-logind[1552]: New session 16 of user core. Oct 31 13:35:32.844430 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 31 13:35:32.971230 containerd[1582]: time="2025-10-31T13:35:32.971189594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:32.972121 containerd[1582]: time="2025-10-31T13:35:32.972033412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 13:35:32.972121 containerd[1582]: time="2025-10-31T13:35:32.972075815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 13:35:32.972363 kubelet[2741]: E1031 13:35:32.972324 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 13:35:32.972938 kubelet[2741]: E1031 13:35:32.972601 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 13:35:32.972938 kubelet[2741]: E1031 13:35:32.972745 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpp7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dnn47_calico-system(a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:32.973967 kubelet[2741]: E1031 13:35:32.973934 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dnn47" podUID="a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d" Oct 31 13:35:33.078917 sshd[5164]: Connection closed by 10.0.0.1 port 41958 Oct 31 13:35:33.079589 sshd-session[5158]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:33.093724 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:41958.service: Deactivated 
successfully. Oct 31 13:35:33.096068 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 13:35:33.097206 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit. Oct 31 13:35:33.099256 systemd-logind[1552]: Removed session 16. Oct 31 13:35:33.101789 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:41964.service - OpenSSH per-connection server daemon (10.0.0.1:41964). Oct 31 13:35:33.156064 sshd[5175]: Accepted publickey for core from 10.0.0.1 port 41964 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:33.158982 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:33.164182 systemd-logind[1552]: New session 17 of user core. Oct 31 13:35:33.173447 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 13:35:33.284840 sshd[5178]: Connection closed by 10.0.0.1 port 41964 Oct 31 13:35:33.285179 sshd-session[5175]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:33.288956 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:41964.service: Deactivated successfully. Oct 31 13:35:33.291007 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 13:35:33.292414 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit. Oct 31 13:35:33.293394 systemd-logind[1552]: Removed session 17. 
Oct 31 13:35:35.760736 containerd[1582]: time="2025-10-31T13:35:35.760462049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 13:35:35.966796 containerd[1582]: time="2025-10-31T13:35:35.966740874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:35.967617 containerd[1582]: time="2025-10-31T13:35:35.967579489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 13:35:35.967663 containerd[1582]: time="2025-10-31T13:35:35.967638813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 13:35:35.967948 kubelet[2741]: E1031 13:35:35.967778 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 13:35:35.967948 kubelet[2741]: E1031 13:35:35.967824 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 13:35:35.968321 kubelet[2741]: E1031 13:35:35.967990 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s54xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-skbn9_calico-system(8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:35.968424 containerd[1582]: time="2025-10-31T13:35:35.968059721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 13:35:36.183063 containerd[1582]: time="2025-10-31T13:35:36.182933013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:36.184066 containerd[1582]: time="2025-10-31T13:35:36.184020645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 13:35:36.184242 containerd[1582]: time="2025-10-31T13:35:36.184057167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 13:35:36.184305 kubelet[2741]: E1031 13:35:36.184241 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:35:36.184353 kubelet[2741]: E1031 13:35:36.184313 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:35:36.185500 kubelet[2741]: E1031 13:35:36.184521 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl8dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65687fc5c7-d8f78_calico-apiserver(dfff0312-ad1e-473d-81aa-fca6e368f968): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:36.185646 containerd[1582]: time="2025-10-31T13:35:36.184599642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 13:35:36.185730 kubelet[2741]: E1031 13:35:36.185687 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-d8f78" podUID="dfff0312-ad1e-473d-81aa-fca6e368f968" Oct 31 13:35:36.387211 containerd[1582]: 
time="2025-10-31T13:35:36.387154143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:36.388085 containerd[1582]: time="2025-10-31T13:35:36.388047842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 13:35:36.388167 containerd[1582]: time="2025-10-31T13:35:36.388124527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 13:35:36.388357 kubelet[2741]: E1031 13:35:36.388289 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:35:36.388357 kubelet[2741]: E1031 13:35:36.388339 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:35:36.388745 kubelet[2741]: E1031 13:35:36.388689 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zvmz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65687fc5c7-n2vln_calico-apiserver(eeb05b47-85b4-418f-a8c7-06c3a0435abf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:36.388857 containerd[1582]: time="2025-10-31T13:35:36.388835693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 13:35:36.389915 kubelet[2741]: E1031 13:35:36.389885 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" podUID="eeb05b47-85b4-418f-a8c7-06c3a0435abf" Oct 31 13:35:36.604385 containerd[1582]: time="2025-10-31T13:35:36.604340842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:36.605168 containerd[1582]: time="2025-10-31T13:35:36.605115893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 13:35:36.605234 containerd[1582]: time="2025-10-31T13:35:36.605163936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 13:35:36.605410 kubelet[2741]: E1031 13:35:36.605373 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 13:35:36.605467 kubelet[2741]: E1031 13:35:36.605425 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 13:35:36.605623 kubelet[2741]: E1031 13:35:36.605545 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s54xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-skbn9_calico-system(8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:36.606677 kubelet[2741]: E1031 13:35:36.606644 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-skbn9" podUID="8ccd0d25-38f0-4382-b6b6-b6b5dfa955fd" Oct 31 13:35:37.763895 containerd[1582]: 
time="2025-10-31T13:35:37.763858120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 13:35:37.982864 containerd[1582]: time="2025-10-31T13:35:37.982822096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:35:37.985394 containerd[1582]: time="2025-10-31T13:35:37.985285496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 13:35:37.985394 containerd[1582]: time="2025-10-31T13:35:37.985356461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 13:35:37.985578 kubelet[2741]: E1031 13:35:37.985485 2741 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 13:35:37.985578 kubelet[2741]: E1031 13:35:37.985533 2741 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 13:35:37.989033 kubelet[2741]: E1031 13:35:37.988977 2741 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t822,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bbbd67f67-xbmvj_calico-system(7e03f713-e3f7-401b-a028-09875138e499): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 13:35:37.990364 kubelet[2741]: E1031 13:35:37.990154 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bbbd67f67-xbmvj" podUID="7e03f713-e3f7-401b-a028-09875138e499" Oct 31 13:35:38.305235 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:41978.service - OpenSSH per-connection server daemon (10.0.0.1:41978). 
Oct 31 13:35:38.366988 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 41978 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:38.368173 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:38.371785 systemd-logind[1552]: New session 18 of user core. Oct 31 13:35:38.386425 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 13:35:38.487630 sshd[5206]: Connection closed by 10.0.0.1 port 41978 Oct 31 13:35:38.488358 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:38.491833 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:41978.service: Deactivated successfully. Oct 31 13:35:38.493624 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 13:35:38.494830 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit. Oct 31 13:35:38.495758 systemd-logind[1552]: Removed session 18. Oct 31 13:35:42.760042 kubelet[2741]: E1031 13:35:42.759977 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6875d8f7b6-5d9sp" 
podUID="0d0fe36f-47b0-435c-890e-97fd7f68acbd" Oct 31 13:35:43.503549 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:60450.service - OpenSSH per-connection server daemon (10.0.0.1:60450). Oct 31 13:35:43.564670 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 60450 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:43.566315 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:43.572393 systemd-logind[1552]: New session 19 of user core. Oct 31 13:35:43.581458 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 13:35:43.670782 sshd[5222]: Connection closed by 10.0.0.1 port 60450 Oct 31 13:35:43.671191 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:43.674438 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:60450.service: Deactivated successfully. Oct 31 13:35:43.677648 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 13:35:43.680205 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit. Oct 31 13:35:43.684047 systemd-logind[1552]: Removed session 19. 
Oct 31 13:35:43.759988 kubelet[2741]: E1031 13:35:43.759778 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dnn47" podUID="a92d4cd4-ad4d-4e88-ae10-529f08ae8b8d" Oct 31 13:35:47.760430 kubelet[2741]: E1031 13:35:47.760132 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:35:47.761966 kubelet[2741]: E1031 13:35:47.761913 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65687fc5c7-n2vln" podUID="eeb05b47-85b4-418f-a8c7-06c3a0435abf" Oct 31 13:35:48.685595 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:60460.service - OpenSSH per-connection server daemon (10.0.0.1:60460). Oct 31 13:35:48.732669 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 60460 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:35:48.733808 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:35:48.738026 systemd-logind[1552]: New session 20 of user core. 
Oct 31 13:35:48.745432 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 13:35:48.815747 sshd[5244]: Connection closed by 10.0.0.1 port 60460 Oct 31 13:35:48.816061 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Oct 31 13:35:48.819681 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:60460.service: Deactivated successfully. Oct 31 13:35:48.821422 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 13:35:48.822002 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit. Oct 31 13:35:48.822997 systemd-logind[1552]: Removed session 20.