Oct 13 00:11:49.756011 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 13 00:11:49.756032 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Oct 12 22:32:01 -00 2025
Oct 13 00:11:49.756041 kernel: KASLR enabled
Oct 13 00:11:49.756047 kernel: efi: EFI v2.7 by EDK II
Oct 13 00:11:49.756053 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 13 00:11:49.756058 kernel: random: crng init done
Oct 13 00:11:49.756065 kernel: secureboot: Secure boot disabled
Oct 13 00:11:49.756070 kernel: ACPI: Early table checksum verification disabled
Oct 13 00:11:49.756076 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 13 00:11:49.756083 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 13 00:11:49.756089 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756094 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756100 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756106 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756113 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756120 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756126 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756132 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756138 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 00:11:49.756144 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 13 00:11:49.756150 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 13 00:11:49.756156 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 00:11:49.756162 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 13 00:11:49.756168 kernel: Zone ranges:
Oct 13 00:11:49.756173 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 00:11:49.756181 kernel: DMA32 empty
Oct 13 00:11:49.756187 kernel: Normal empty
Oct 13 00:11:49.756192 kernel: Device empty
Oct 13 00:11:49.756198 kernel: Movable zone start for each node
Oct 13 00:11:49.756204 kernel: Early memory node ranges
Oct 13 00:11:49.756210 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 13 00:11:49.756216 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 13 00:11:49.756222 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 13 00:11:49.756228 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 13 00:11:49.756258 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 13 00:11:49.756265 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 13 00:11:49.756270 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 13 00:11:49.756278 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 13 00:11:49.756285 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 13 00:11:49.756291 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 13 00:11:49.756299 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 13 00:11:49.756305 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 13 00:11:49.756312 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 13 00:11:49.756319 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 00:11:49.756326 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 13 00:11:49.756332 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 13 00:11:49.756338 kernel: psci: probing for conduit method from ACPI.
Oct 13 00:11:49.756344 kernel: psci: PSCIv1.1 detected in firmware.
Oct 13 00:11:49.756351 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 13 00:11:49.756357 kernel: psci: Trusted OS migration not required
Oct 13 00:11:49.756363 kernel: psci: SMC Calling Convention v1.1
Oct 13 00:11:49.756370 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 13 00:11:49.756376 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 13 00:11:49.756384 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 13 00:11:49.756390 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 13 00:11:49.756397 kernel: Detected PIPT I-cache on CPU0
Oct 13 00:11:49.756403 kernel: CPU features: detected: GIC system register CPU interface
Oct 13 00:11:49.756409 kernel: CPU features: detected: Spectre-v4
Oct 13 00:11:49.756416 kernel: CPU features: detected: Spectre-BHB
Oct 13 00:11:49.756422 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 13 00:11:49.756428 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 13 00:11:49.756435 kernel: CPU features: detected: ARM erratum 1418040
Oct 13 00:11:49.756441 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 13 00:11:49.756447 kernel: alternatives: applying boot alternatives
Oct 13 00:11:49.756455 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910
Oct 13 00:11:49.756463 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 00:11:49.756469 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 00:11:49.756476 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 00:11:49.756482 kernel: Fallback order for Node 0: 0
Oct 13 00:11:49.756488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 13 00:11:49.756494 kernel: Policy zone: DMA
Oct 13 00:11:49.756501 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 00:11:49.756507 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 13 00:11:49.756513 kernel: software IO TLB: area num 4.
Oct 13 00:11:49.756519 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 13 00:11:49.756526 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 13 00:11:49.756533 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 13 00:11:49.756540 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 00:11:49.756547 kernel: rcu: RCU event tracing is enabled.
Oct 13 00:11:49.756553 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 13 00:11:49.756559 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 00:11:49.756566 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 00:11:49.756572 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 00:11:49.756579 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 13 00:11:49.756585 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 00:11:49.756591 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 00:11:49.756598 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 13 00:11:49.756605 kernel: GICv3: 256 SPIs implemented
Oct 13 00:11:49.756612 kernel: GICv3: 0 Extended SPIs implemented
Oct 13 00:11:49.756618 kernel: Root IRQ handler: gic_handle_irq
Oct 13 00:11:49.756627 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 13 00:11:49.756636 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 13 00:11:49.756646 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 13 00:11:49.756653 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 13 00:11:49.756660 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 13 00:11:49.756666 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 13 00:11:49.756673 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 13 00:11:49.756680 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 13 00:11:49.756686 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 00:11:49.756694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:11:49.756701 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 13 00:11:49.756708 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 13 00:11:49.756720 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 13 00:11:49.756727 kernel: arm-pv: using stolen time PV
Oct 13 00:11:49.756734 kernel: Console: colour dummy device 80x25
Oct 13 00:11:49.756740 kernel: ACPI: Core revision 20240827
Oct 13 00:11:49.756748 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 13 00:11:49.756754 kernel: pid_max: default: 32768 minimum: 301
Oct 13 00:11:49.756761 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 00:11:49.756769 kernel: landlock: Up and running.
Oct 13 00:11:49.756776 kernel: SELinux: Initializing.
Oct 13 00:11:49.756782 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 00:11:49.756789 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 00:11:49.756796 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 00:11:49.756803 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 00:11:49.756810 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 00:11:49.756816 kernel: Remapping and enabling EFI services.
Oct 13 00:11:49.756823 kernel: smp: Bringing up secondary CPUs ...
Oct 13 00:11:49.756836 kernel: Detected PIPT I-cache on CPU1
Oct 13 00:11:49.756843 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 13 00:11:49.756851 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 13 00:11:49.756859 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:11:49.756866 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 13 00:11:49.756873 kernel: Detected PIPT I-cache on CPU2
Oct 13 00:11:49.756880 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 13 00:11:49.756887 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 13 00:11:49.756896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:11:49.756902 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 13 00:11:49.756909 kernel: Detected PIPT I-cache on CPU3
Oct 13 00:11:49.756919 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 13 00:11:49.756928 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 13 00:11:49.756937 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 00:11:49.756944 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 13 00:11:49.756951 kernel: smp: Brought up 1 node, 4 CPUs
Oct 13 00:11:49.756958 kernel: SMP: Total of 4 processors activated.
Oct 13 00:11:49.756967 kernel: CPU: All CPU(s) started at EL1
Oct 13 00:11:49.756975 kernel: CPU features: detected: 32-bit EL0 Support
Oct 13 00:11:49.756983 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 13 00:11:49.756990 kernel: CPU features: detected: Common not Private translations
Oct 13 00:11:49.756997 kernel: CPU features: detected: CRC32 instructions
Oct 13 00:11:49.757004 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 13 00:11:49.757011 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 13 00:11:49.757018 kernel: CPU features: detected: LSE atomic instructions
Oct 13 00:11:49.757025 kernel: CPU features: detected: Privileged Access Never
Oct 13 00:11:49.757033 kernel: CPU features: detected: RAS Extension Support
Oct 13 00:11:49.757040 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 13 00:11:49.757047 kernel: alternatives: applying system-wide alternatives
Oct 13 00:11:49.757054 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 13 00:11:49.757062 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Oct 13 00:11:49.757069 kernel: devtmpfs: initialized
Oct 13 00:11:49.757076 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 00:11:49.757083 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 13 00:11:49.757090 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 13 00:11:49.757099 kernel: 0 pages in range for non-PLT usage
Oct 13 00:11:49.757106 kernel: 508560 pages in range for PLT usage
Oct 13 00:11:49.757113 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 00:11:49.757120 kernel: SMBIOS 3.0.0 present.
Oct 13 00:11:49.757127 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 13 00:11:49.757134 kernel: DMI: Memory slots populated: 1/1
Oct 13 00:11:49.757141 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 00:11:49.757148 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 13 00:11:49.757156 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 13 00:11:49.757164 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 13 00:11:49.757171 kernel: audit: initializing netlink subsys (disabled)
Oct 13 00:11:49.757178 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Oct 13 00:11:49.757185 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 00:11:49.757192 kernel: cpuidle: using governor menu
Oct 13 00:11:49.757199 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 13 00:11:49.757206 kernel: ASID allocator initialised with 32768 entries
Oct 13 00:11:49.757212 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 00:11:49.757219 kernel: Serial: AMBA PL011 UART driver
Oct 13 00:11:49.757228 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 00:11:49.757247 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 00:11:49.757258 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 13 00:11:49.757265 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 13 00:11:49.757272 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 00:11:49.757279 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 00:11:49.757285 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 13 00:11:49.757292 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 13 00:11:49.757299 kernel: ACPI: Added _OSI(Module Device)
Oct 13 00:11:49.757308 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 00:11:49.757315 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 00:11:49.757322 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 00:11:49.757329 kernel: ACPI: Interpreter enabled
Oct 13 00:11:49.757335 kernel: ACPI: Using GIC for interrupt routing
Oct 13 00:11:49.757342 kernel: ACPI: MCFG table detected, 1 entries
Oct 13 00:11:49.757349 kernel: ACPI: CPU0 has been hot-added
Oct 13 00:11:49.757356 kernel: ACPI: CPU1 has been hot-added
Oct 13 00:11:49.757363 kernel: ACPI: CPU2 has been hot-added
Oct 13 00:11:49.757370 kernel: ACPI: CPU3 has been hot-added
Oct 13 00:11:49.757378 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 13 00:11:49.757385 kernel: printk: legacy console [ttyAMA0] enabled
Oct 13 00:11:49.757392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 00:11:49.757538 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 00:11:49.757603 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 13 00:11:49.757661 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 13 00:11:49.757729 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 13 00:11:49.757793 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 13 00:11:49.757803 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 13 00:11:49.757810 kernel: PCI host bridge to bus 0000:00
Oct 13 00:11:49.757876 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 13 00:11:49.757929 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 13 00:11:49.757980 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 13 00:11:49.758031 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 00:11:49.758110 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 13 00:11:49.758180 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 00:11:49.758254 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 13 00:11:49.758329 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 13 00:11:49.758389 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 13 00:11:49.758447 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 13 00:11:49.758506 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 13 00:11:49.758567 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 13 00:11:49.758620 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 13 00:11:49.758672 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 13 00:11:49.758732 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 13 00:11:49.758742 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 13 00:11:49.758750 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 13 00:11:49.758757 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 13 00:11:49.758768 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 13 00:11:49.758775 kernel: iommu: Default domain type: Translated
Oct 13 00:11:49.758782 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 13 00:11:49.758790 kernel: efivars: Registered efivars operations
Oct 13 00:11:49.758797 kernel: vgaarb: loaded
Oct 13 00:11:49.758803 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 13 00:11:49.758810 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 00:11:49.758817 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 00:11:49.758824 kernel: pnp: PnP ACPI init
Oct 13 00:11:49.758891 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 13 00:11:49.758902 kernel: pnp: PnP ACPI: found 1 devices
Oct 13 00:11:49.758908 kernel: NET: Registered PF_INET protocol family
Oct 13 00:11:49.758921 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 00:11:49.758928 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 00:11:49.758936 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 00:11:49.758943 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 00:11:49.758950 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 00:11:49.758958 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 00:11:49.758966 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 00:11:49.758973 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 00:11:49.758981 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 00:11:49.758988 kernel: PCI: CLS 0 bytes, default 64
Oct 13 00:11:49.758995 kernel: kvm [1]: HYP mode not available
Oct 13 00:11:49.759002 kernel: Initialise system trusted keyrings
Oct 13 00:11:49.759009 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 00:11:49.759017 kernel: Key type asymmetric registered
Oct 13 00:11:49.759025 kernel: Asymmetric key parser 'x509' registered
Oct 13 00:11:49.759033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 13 00:11:49.759040 kernel: io scheduler mq-deadline registered
Oct 13 00:11:49.759048 kernel: io scheduler kyber registered
Oct 13 00:11:49.759055 kernel: io scheduler bfq registered
Oct 13 00:11:49.759062 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 13 00:11:49.759069 kernel: ACPI: button: Power Button [PWRB]
Oct 13 00:11:49.759077 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 13 00:11:49.759138 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 13 00:11:49.759149 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 00:11:49.759156 kernel: thunder_xcv, ver 1.0
Oct 13 00:11:49.759163 kernel: thunder_bgx, ver 1.0
Oct 13 00:11:49.759170 kernel: nicpf, ver 1.0
Oct 13 00:11:49.759177 kernel: nicvf, ver 1.0
Oct 13 00:11:49.759266 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 13 00:11:49.759326 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T00:11:49 UTC (1760314309)
Oct 13 00:11:49.759336 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 13 00:11:49.759345 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 13 00:11:49.759352 kernel: watchdog: NMI not fully supported
Oct 13 00:11:49.759359 kernel: watchdog: Hard watchdog permanently disabled
Oct 13 00:11:49.759366 kernel: NET: Registered PF_INET6 protocol family
Oct 13 00:11:49.759373 kernel: Segment Routing with IPv6
Oct 13 00:11:49.759380 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 00:11:49.759387 kernel: NET: Registered PF_PACKET protocol family
Oct 13 00:11:49.759394 kernel: Key type dns_resolver registered
Oct 13 00:11:49.759401 kernel: registered taskstats version 1
Oct 13 00:11:49.759408 kernel: Loading compiled-in X.509 certificates
Oct 13 00:11:49.759417 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: b8447a1087a9e9c4d5b9d4c2f2bba5a69a74f139'
Oct 13 00:11:49.759424 kernel: Demotion targets for Node 0: null
Oct 13 00:11:49.759431 kernel: Key type .fscrypt registered
Oct 13 00:11:49.759437 kernel: Key type fscrypt-provisioning registered
Oct 13 00:11:49.759445 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 00:11:49.759452 kernel: ima: Allocated hash algorithm: sha1
Oct 13 00:11:49.759459 kernel: ima: No architecture policies found
Oct 13 00:11:49.759466 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 13 00:11:49.759475 kernel: clk: Disabling unused clocks
Oct 13 00:11:49.759482 kernel: PM: genpd: Disabling unused power domains
Oct 13 00:11:49.759488 kernel: Warning: unable to open an initial console.
Oct 13 00:11:49.759495 kernel: Freeing unused kernel memory: 38976K
Oct 13 00:11:49.759502 kernel: Run /init as init process
Oct 13 00:11:49.759509 kernel: with arguments:
Oct 13 00:11:49.759516 kernel: /init
Oct 13 00:11:49.759523 kernel: with environment:
Oct 13 00:11:49.759530 kernel: HOME=/
Oct 13 00:11:49.759538 kernel: TERM=linux
Oct 13 00:11:49.759546 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 00:11:49.759553 systemd[1]: Successfully made /usr/ read-only.
Oct 13 00:11:49.759564 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 00:11:49.759571 systemd[1]: Detected virtualization kvm.
Oct 13 00:11:49.759579 systemd[1]: Detected architecture arm64.
Oct 13 00:11:49.759586 systemd[1]: Running in initrd.
Oct 13 00:11:49.759593 systemd[1]: No hostname configured, using default hostname.
Oct 13 00:11:49.759602 systemd[1]: Hostname set to .
Oct 13 00:11:49.759609 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 00:11:49.759617 systemd[1]: Queued start job for default target initrd.target.
Oct 13 00:11:49.759624 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:11:49.759631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:11:49.759640 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 00:11:49.759647 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 00:11:49.759655 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 00:11:49.759665 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 00:11:49.759679 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 13 00:11:49.759696 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 13 00:11:49.759703 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:11:49.759711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:11:49.759727 systemd[1]: Reached target paths.target - Path Units.
Oct 13 00:11:49.759736 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 00:11:49.759744 systemd[1]: Reached target swap.target - Swaps.
Oct 13 00:11:49.759752 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 00:11:49.759759 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 00:11:49.759767 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 00:11:49.759775 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 00:11:49.759782 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 00:11:49.759790 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:11:49.759798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:11:49.759807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:11:49.759815 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 00:11:49.759822 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 00:11:49.759830 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 00:11:49.759837 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 00:11:49.759845 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 00:11:49.759853 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 00:11:49.759860 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 00:11:49.759869 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 00:11:49.759877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:11:49.759885 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 00:11:49.759893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:11:49.759901 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 00:11:49.759927 systemd-journald[244]: Collecting audit messages is disabled.
Oct 13 00:11:49.759946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 00:11:49.759956 systemd-journald[244]: Journal started
Oct 13 00:11:49.759975 systemd-journald[244]: Runtime Journal (/run/log/journal/56eaa0b13cc648ed946ef3535421daa0) is 6M, max 48.5M, 42.4M free.
Oct 13 00:11:49.753310 systemd-modules-load[245]: Inserted module 'overlay'
Oct 13 00:11:49.762353 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 00:11:49.767261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 00:11:49.768405 systemd-modules-load[245]: Inserted module 'br_netfilter'
Oct 13 00:11:49.769893 kernel: Bridge firewalling registered
Oct 13 00:11:49.768415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:11:49.769608 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:11:49.772453 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 00:11:49.773908 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 00:11:49.780542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 00:11:49.781617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 00:11:49.784521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 00:11:49.790706 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 00:11:49.792339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:11:49.794527 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:11:49.796645 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:11:49.798648 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 00:11:49.801040 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 00:11:49.803073 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 00:11:49.820391 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910
Oct 13 00:11:49.833609 systemd-resolved[288]: Positive Trust Anchors:
Oct 13 00:11:49.833629 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 00:11:49.833660 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 00:11:49.838604 systemd-resolved[288]: Defaulting to hostname 'linux'.
Oct 13 00:11:49.839601 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 00:11:49.841245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:11:49.893282 kernel: SCSI subsystem initialized
Oct 13 00:11:49.898254 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 00:11:49.905258 kernel: iscsi: registered transport (tcp)
Oct 13 00:11:49.918266 kernel: iscsi: registered transport (qla4xxx)
Oct 13 00:11:49.918327 kernel: QLogic iSCSI HBA Driver
Oct 13 00:11:49.935685 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 00:11:49.956140 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:11:49.957540 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 00:11:50.005810 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 00:11:50.007995 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 00:11:50.068285 kernel: raid6: neonx8 gen() 15755 MB/s
Oct 13 00:11:50.085254 kernel: raid6: neonx4 gen() 15739 MB/s
Oct 13 00:11:50.102253 kernel: raid6: neonx2 gen() 13211 MB/s
Oct 13 00:11:50.119253 kernel: raid6: neonx1 gen() 10423 MB/s
Oct 13 00:11:50.136253 kernel: raid6: int64x8 gen() 6905 MB/s
Oct 13 00:11:50.153258 kernel: raid6: int64x4 gen() 7352 MB/s
Oct 13 00:11:50.170250 kernel: raid6: int64x2 gen() 6102 MB/s
Oct 13 00:11:50.187437 kernel: raid6: int64x1 gen() 5052 MB/s
Oct 13 00:11:50.187467 kernel: raid6: using algorithm neonx8 gen() 15755 MB/s
Oct 13 00:11:50.205436 kernel: raid6: .... xor() 9841 MB/s, rmw enabled
Oct 13 00:11:50.205481 kernel: raid6: using neon recovery algorithm
Oct 13 00:11:50.214252 kernel: xor: measuring software checksum speed
Oct 13 00:11:50.214285 kernel: 8regs : 19603 MB/sec
Oct 13 00:11:50.215702 kernel: 32regs : 17496 MB/sec
Oct 13 00:11:50.215721 kernel: arm64_neon : 27889 MB/sec
Oct 13 00:11:50.215730 kernel: xor: using function: arm64_neon (27889 MB/sec)
Oct 13 00:11:50.276276 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 00:11:50.283289 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 00:11:50.285690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:11:50.317509 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Oct 13 00:11:50.321614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:11:50.323343 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 00:11:50.354379 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Oct 13 00:11:50.379320 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 00:11:50.381493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 00:11:50.441298 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:11:50.443835 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 00:11:50.494429 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 13 00:11:50.495689 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 13 00:11:50.511828 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 13 00:11:50.511869 kernel: GPT:9289727 != 19775487
Oct 13 00:11:50.511879 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 13 00:11:50.511678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 00:11:50.515254 kernel: GPT:9289727 != 19775487
Oct 13 00:11:50.515275 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 13 00:11:50.515285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 00:11:50.511808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:11:50.518799 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:11:50.524105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:11:50.551846 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 13 00:11:50.554023 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 00:11:50.565522 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 13 00:11:50.566512 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 13 00:11:50.568338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:11:50.576982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 00:11:50.584174 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 13 00:11:50.585207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 00:11:50.586821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:11:50.588427 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 00:11:50.590911 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 00:11:50.592606 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 00:11:50.617794 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 00:11:50.620683 disk-uuid[591]: Primary Header is updated.
Oct 13 00:11:50.620683 disk-uuid[591]: Secondary Entries is updated.
Oct 13 00:11:50.620683 disk-uuid[591]: Secondary Header is updated.
Oct 13 00:11:50.623119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 00:11:50.633261 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 00:11:51.632433 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 00:11:51.632485 disk-uuid[599]: The operation has completed successfully.
Oct 13 00:11:51.650191 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 00:11:51.650302 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 00:11:51.682383 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 13 00:11:51.708264 sh[610]: Success
Oct 13 00:11:51.720567 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 00:11:51.720607 kernel: device-mapper: uevent: version 1.0.3
Oct 13 00:11:51.720627 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 00:11:51.728256 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 13 00:11:51.752024 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 00:11:51.754727 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 13 00:11:51.771503 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 13 00:11:51.777835 kernel: BTRFS: device fsid e4495086-3456-43e0-be7b-4c3c53a67174 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (622)
Oct 13 00:11:51.777878 kernel: BTRFS info (device dm-0): first mount of filesystem e4495086-3456-43e0-be7b-4c3c53a67174
Oct 13 00:11:51.777888 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:11:51.782251 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 00:11:51.782283 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 00:11:51.783291 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 13 00:11:51.784335 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 00:11:51.785266 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 00:11:51.786022 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 00:11:51.788606 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 00:11:51.812309 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Oct 13 00:11:51.812358 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:11:51.813893 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:11:51.816271 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 00:11:51.816316 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 00:11:51.820266 kernel: BTRFS info (device vda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:11:51.821358 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 00:11:51.823155 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 00:11:51.892453 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 00:11:51.895801 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 00:11:51.919745 ignition[694]: Ignition 2.22.0
Oct 13 00:11:51.919759 ignition[694]: Stage: fetch-offline
Oct 13 00:11:51.919810 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:11:51.919819 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:11:51.919905 ignition[694]: parsed url from cmdline: ""
Oct 13 00:11:51.919908 ignition[694]: no config URL provided
Oct 13 00:11:51.919912 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 00:11:51.919920 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Oct 13 00:11:51.919942 ignition[694]: op(1): [started] loading QEMU firmware config module
Oct 13 00:11:51.919947 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 13 00:11:51.933795 ignition[694]: op(1): [finished] loading QEMU firmware config module
Oct 13 00:11:51.934826 systemd-networkd[804]: lo: Link UP
Oct 13 00:11:51.934830 systemd-networkd[804]: lo: Gained carrier
Oct 13 00:11:51.935745 systemd-networkd[804]: Enumeration completed
Oct 13 00:11:51.936341 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 00:11:51.936473 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:11:51.936477 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:11:51.937738 systemd[1]: Reached target network.target - Network.
Oct 13 00:11:51.938008 systemd-networkd[804]: eth0: Link UP
Oct 13 00:11:51.938388 systemd-networkd[804]: eth0: Gained carrier
Oct 13 00:11:51.938400 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:11:51.966304 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 13 00:11:51.984202 ignition[694]: parsing config with SHA512: adf11de3d3471d4bd247fa0bf7f54e9ade09c07e613fc87cae6cc78281a0a493584bceaf9803de38716298b04b1be19152ba09839ce6e0c4f15bd3f58ee5bbc7
Oct 13 00:11:51.989716 unknown[694]: fetched base config from "system"
Oct 13 00:11:51.989728 unknown[694]: fetched user config from "qemu"
Oct 13 00:11:51.990894 ignition[694]: fetch-offline: fetch-offline passed
Oct 13 00:11:51.992468 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 00:11:51.990974 ignition[694]: Ignition finished successfully
Oct 13 00:11:51.994135 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 13 00:11:51.995014 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 00:11:52.025199 ignition[812]: Ignition 2.22.0
Oct 13 00:11:52.025221 ignition[812]: Stage: kargs
Oct 13 00:11:52.025387 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:11:52.025396 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:11:52.026358 ignition[812]: kargs: kargs passed
Oct 13 00:11:52.026407 ignition[812]: Ignition finished successfully
Oct 13 00:11:52.030604 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 00:11:52.032563 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 00:11:52.064388 ignition[820]: Ignition 2.22.0
Oct 13 00:11:52.064406 ignition[820]: Stage: disks
Oct 13 00:11:52.064553 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:11:52.064561 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:11:52.065365 ignition[820]: disks: disks passed
Oct 13 00:11:52.067421 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 00:11:52.065412 ignition[820]: Ignition finished successfully
Oct 13 00:11:52.069002 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 00:11:52.070074 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 00:11:52.071512 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 00:11:52.072726 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 00:11:52.074160 systemd[1]: Reached target basic.target - Basic System.
Oct 13 00:11:52.076537 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 00:11:52.108788 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Oct 13 00:11:52.112990 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 00:11:52.114971 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 00:11:52.188907 kernel: EXT4-fs (vda9): mounted filesystem 1aa1d0b4-cbac-4728-b9e0-662fa574e9ad r/w with ordered data mode. Quota mode: none.
Oct 13 00:11:52.190598 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 00:11:52.192479 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 00:11:52.196339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 00:11:52.201851 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 00:11:52.202653 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 13 00:11:52.202696 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 00:11:52.202733 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 00:11:52.218191 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 00:11:52.220394 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 00:11:52.231542 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Oct 13 00:11:52.234005 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:11:52.234055 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:11:52.239441 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 00:11:52.239496 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 00:11:52.242645 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 00:11:52.262525 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 00:11:52.267506 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
Oct 13 00:11:52.273164 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 00:11:52.278127 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 00:11:52.355877 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 00:11:52.357965 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 00:11:52.360113 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 00:11:52.385246 kernel: BTRFS info (device vda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:11:52.407461 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 00:11:52.421632 ignition[954]: INFO : Ignition 2.22.0
Oct 13 00:11:52.421632 ignition[954]: INFO : Stage: mount
Oct 13 00:11:52.422824 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:11:52.422824 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:11:52.422824 ignition[954]: INFO : mount: mount passed
Oct 13 00:11:52.422824 ignition[954]: INFO : Ignition finished successfully
Oct 13 00:11:52.424388 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 00:11:52.428671 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 00:11:52.776781 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 00:11:52.778352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 00:11:52.811259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (964)
Oct 13 00:11:52.811334 kernel: BTRFS info (device vda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:11:52.812788 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:11:52.815480 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 00:11:52.815532 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 00:11:52.816899 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 00:11:52.850369 ignition[981]: INFO : Ignition 2.22.0
Oct 13 00:11:52.850369 ignition[981]: INFO : Stage: files
Oct 13 00:11:52.851688 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:11:52.851688 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 00:11:52.851688 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 00:11:52.854084 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 00:11:52.854084 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 00:11:52.857376 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 00:11:52.858304 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 00:11:52.859334 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 00:11:52.858569 unknown[981]: wrote ssh authorized keys file for user: core
Oct 13 00:11:52.861289 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Oct 13 00:11:52.861289 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Oct 13 00:11:52.981520 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 00:11:53.117724 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Oct 13 00:11:53.119401 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 00:11:53.119401 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 00:11:53.119401 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 00:11:53.119401 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 00:11:53.119401 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 00:11:53.119401 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 00:11:53.119401 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 00:11:53.119401 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 00:11:53.130783 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 00:11:53.130783 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 00:11:53.130783 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 13 00:11:53.130783 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 13 00:11:53.130783 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 13 00:11:53.130783 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Oct 13 00:11:53.473784 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 00:11:53.981379 systemd-networkd[804]: eth0: Gained IPv6LL
Oct 13 00:11:54.376838 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 13 00:11:54.376838 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 13 00:11:54.380217 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 00:11:54.380217 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 00:11:54.380217 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 13 00:11:54.380217 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 13 00:11:54.380217 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 00:11:54.380217 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 00:11:54.380217 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 13 00:11:54.380217 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 13 00:11:54.399038 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 00:11:54.402332 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 00:11:54.403368 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 13 00:11:54.403368 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 00:11:54.403368 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 00:11:54.403368 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 00:11:54.403368 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 00:11:54.403368 ignition[981]: INFO : files: files passed
Oct 13 00:11:54.403368 ignition[981]: INFO : Ignition finished successfully
Oct 13 00:11:54.404659 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 00:11:54.407141 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 00:11:54.408710 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 00:11:54.420130 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 00:11:54.420258 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 00:11:54.423030 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 13 00:11:54.424966 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:11:54.424966 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:11:54.427315 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:11:54.427047 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 00:11:54.428613 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 00:11:54.430264 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 00:11:54.470976 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 00:11:54.471071 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 00:11:54.472742 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 00:11:54.474000 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 00:11:54.475302 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 00:11:54.475961 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 00:11:54.495208 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 00:11:54.497201 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 00:11:54.517149 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:11:54.518153 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 00:11:54.519970 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 00:11:54.521541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 00:11:54.521664 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 00:11:54.523928 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 00:11:54.525729 systemd[1]: Stopped target basic.target - Basic System. Oct 13 00:11:54.527094 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 00:11:54.528567 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 00:11:54.530212 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 00:11:54.531974 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 00:11:54.533684 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 00:11:54.535305 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 00:11:54.537037 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 00:11:54.538834 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 00:11:54.540301 systemd[1]: Stopped target swap.target - Swaps. Oct 13 00:11:54.541683 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 00:11:54.541802 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 00:11:54.543923 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 00:11:54.545570 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 00:11:54.547133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 00:11:54.547247 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 00:11:54.548992 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 00:11:54.549103 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 00:11:54.551622 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 00:11:54.551746 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 00:11:54.553298 systemd[1]: Stopped target paths.target - Path Units. Oct 13 00:11:54.554682 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 00:11:54.554794 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 00:11:54.556342 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 00:11:54.557968 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 00:11:54.559280 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 00:11:54.559370 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 00:11:54.560931 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 00:11:54.561005 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 00:11:54.562997 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Oct 13 00:11:54.563101 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 00:11:54.564509 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 00:11:54.564608 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 00:11:54.566732 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 00:11:54.568728 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 00:11:54.569408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 00:11:54.569532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 00:11:54.571149 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 00:11:54.571264 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 00:11:54.576538 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 00:11:54.581381 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 00:11:54.590270 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 00:11:54.594471 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 00:11:54.594562 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 00:11:54.596510 ignition[1036]: INFO : Ignition 2.22.0 Oct 13 00:11:54.596510 ignition[1036]: INFO : Stage: umount Oct 13 00:11:54.596510 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 00:11:54.596510 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:11:54.599168 ignition[1036]: INFO : umount: umount passed Oct 13 00:11:54.599168 ignition[1036]: INFO : Ignition finished successfully Oct 13 00:11:54.598962 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 00:11:54.599064 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 00:11:54.600128 systemd[1]: Stopped target network.target - Network. Oct 13 00:11:54.601186 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 00:11:54.601250 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 00:11:54.602433 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 00:11:54.602469 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 00:11:54.604436 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 00:11:54.604489 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 00:11:54.605228 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 00:11:54.605280 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 00:11:54.606744 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 00:11:54.606789 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 00:11:54.608199 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 00:11:54.609555 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 00:11:54.620813 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 00:11:54.620935 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 00:11:54.623822 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 13 00:11:54.624012 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 00:11:54.624090 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
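The umount-stage messages above show Ignition 2.22.0 first looking for base configs in /usr/lib/ignition/base.d and in a platform directory such as /usr/lib/ignition/base.platform.d/qemu, and logging when neither provides anything. The Python sketch below only illustrates that lookup order; Ignition's real implementation is in Go, and the *.ign suffix filter here is an assumption.

from pathlib import Path

# Directories named in the "Stage: umount" messages above; "qemu" is the
# platform detected on this boot.
BASE_DIR = Path("/usr/lib/ignition/base.d")
PLATFORM_DIR = Path("/usr/lib/ignition/base.platform.d/qemu")

def find_base_configs():
    """List base config fragments, reporting the same two cases the log shows."""
    configs = []
    for directory in (BASE_DIR, PLATFORM_DIR):
        if not directory.is_dir():
            print(f'no config dir at "{directory}"')
            continue
        found = sorted(directory.glob("*.ign"))  # assumed naming convention
        if not found:
            print(f'no configs at "{directory}"')
        configs.extend(found)
    return configs

if __name__ == "__main__":
    print(find_base_configs())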
Oct 13 00:11:54.627738 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 13 00:11:54.628296 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 00:11:54.629982 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 00:11:54.630016 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 00:11:54.632175 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 00:11:54.633427 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 00:11:54.633476 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 00:11:54.634960 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 00:11:54.634996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:11:54.637188 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 00:11:54.637227 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 00:11:54.638567 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 00:11:54.638603 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 00:11:54.642050 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 00:11:54.645137 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 13 00:11:54.645196 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 13 00:11:54.657941 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 00:11:54.658073 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 00:11:54.659676 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 00:11:54.659842 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 00:11:54.662683 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 00:11:54.662769 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 00:11:54.664148 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 00:11:54.664181 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 00:11:54.665631 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 00:11:54.665673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 00:11:54.667781 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 00:11:54.667821 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 00:11:54.669759 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 00:11:54.669809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 00:11:54.672654 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 00:11:54.674157 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 00:11:54.674207 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 00:11:54.676616 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 00:11:54.676658 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 00:11:54.679024 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Oct 13 00:11:54.679069 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 00:11:54.681547 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 00:11:54.681587 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 00:11:54.683385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 00:11:54.683425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:11:54.686523 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Oct 13 00:11:54.686574 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Oct 13 00:11:54.686600 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 13 00:11:54.686628 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 13 00:11:54.691437 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 00:11:54.691529 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 00:11:54.693221 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 00:11:54.695187 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 00:11:54.712868 systemd[1]: Switching root. Oct 13 00:11:54.746366 systemd-journald[244]: Journal stopped Oct 13 00:11:55.482616 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Oct 13 00:11:55.482672 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 00:11:55.482690 kernel: SELinux: policy capability open_perms=1 Oct 13 00:11:55.482711 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 00:11:55.482721 kernel: SELinux: policy capability always_check_network=0 Oct 13 00:11:55.482730 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 00:11:55.482740 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 00:11:55.482750 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 00:11:55.482759 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 00:11:55.482768 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 00:11:55.482778 kernel: audit: type=1403 audit(1760314314.923:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 00:11:55.482790 systemd[1]: Successfully loaded SELinux policy in 59.866ms. Oct 13 00:11:55.482808 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.654ms. Oct 13 00:11:55.482819 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 00:11:55.482829 systemd[1]: Detected virtualization kvm. Oct 13 00:11:55.482842 systemd[1]: Detected architecture arm64. Oct 13 00:11:55.482852 systemd[1]: Detected first boot. Oct 13 00:11:55.482862 systemd[1]: Initializing machine ID from VM UUID. Oct 13 00:11:55.482872 zram_generator::config[1082]: No configuration found. Oct 13 00:11:55.482883 kernel: NET: Registered PF_VSOCK protocol family Oct 13 00:11:55.482893 systemd[1]: Populated /etc with preset unit settings. 
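"Initializing machine ID from VM UUID" above is systemd deriving /etc/machine-id from the virtual machine's UUID on first boot. The sketch below assumes the UUID comes from the SMBIOS/DMI product UUID at /sys/class/dmi/id/product_uuid, the usual source on a KVM/QEMU guest; systemd's real logic covers more sources and edge cases.

from pathlib import Path

# Assumed source of the VM UUID on a KVM/QEMU guest with SMBIOS exposed.
PRODUCT_UUID = Path("/sys/class/dmi/id/product_uuid")

def machine_id_from_vm_uuid():
    """Turn an RFC 4122 UUID (8-4-4-4-12 hex) into the 32-character
    lower-case hex string format used by /etc/machine-id."""
    uuid = PRODUCT_UUID.read_text().strip()
    machine_id = uuid.replace("-", "").lower()
    if len(machine_id) != 32 or any(c not in "0123456789abcdef" for c in machine_id):
        raise ValueError(f"unexpected UUID format: {uuid!r}")
    return machine_id

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())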
Oct 13 00:11:55.482903 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 13 00:11:55.482913 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 00:11:55.482923 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 00:11:55.482933 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 00:11:55.482943 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 00:11:55.482956 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 00:11:55.482966 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 00:11:55.482977 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 00:11:55.482987 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 00:11:55.482997 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 00:11:55.483006 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 00:11:55.483016 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 00:11:55.483026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 00:11:55.483036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 00:11:55.483046 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 00:11:55.483057 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 00:11:55.483067 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 00:11:55.483077 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 00:11:55.483086 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 13 00:11:55.483096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 00:11:55.483107 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 00:11:55.483117 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 00:11:55.483127 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 00:11:55.483138 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 00:11:55.483148 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 00:11:55.483158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 00:11:55.483168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 00:11:55.483177 systemd[1]: Reached target slices.target - Slice Units. Oct 13 00:11:55.483187 systemd[1]: Reached target swap.target - Swaps. Oct 13 00:11:55.483197 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 00:11:55.483207 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 00:11:55.483217 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 00:11:55.483228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 00:11:55.483248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
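"Populated /etc with preset unit settings" above, together with the earlier Ignition ops that set coreos-metadata.service to disabled and prepare-helm.service to enabled, rests on systemd preset files: plain lists of "enable NAME" and "disable NAME" lines (globs allowed) where the first matching line wins. A small sketch of that first-match rule follows; the example file content is invented, while real preset files live under /usr/lib/systemd/system-preset/.

import fnmatch

# Hypothetical preset content in the documented format; real files are
# *.preset fragments read in lexical order.
PRESET_LINES = """
enable prepare-helm.service
disable coreos-metadata.service
disable *
""".strip().splitlines()

def preset_action(unit, preset_lines=PRESET_LINES):
    """Return 'enable' or 'disable' for a unit: the first matching line wins."""
    for line in preset_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        action, pattern = line.split(None, 1)
        if fnmatch.fnmatch(unit, pattern):
            return action
    return "enable"  # systemd's documented default when no line matches

for unit in ("prepare-helm.service", "coreos-metadata.service", "sshd.service"):
    print(unit, "->", preset_action(unit))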
Oct 13 00:11:55.483258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 00:11:55.483268 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 00:11:55.483278 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 00:11:55.483287 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 00:11:55.483298 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 00:11:55.483307 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 00:11:55.483318 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 00:11:55.483329 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 00:11:55.483340 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 00:11:55.483350 systemd[1]: Reached target machines.target - Containers. Oct 13 00:11:55.483360 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 00:11:55.483370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:11:55.483380 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 00:11:55.483389 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 00:11:55.483399 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:11:55.483410 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 00:11:55.483421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:11:55.483430 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 00:11:55.483440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:11:55.483451 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 00:11:55.483461 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 00:11:55.483471 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 00:11:55.483481 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 00:11:55.483490 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 00:11:55.483502 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:11:55.483511 kernel: loop: module loaded Oct 13 00:11:55.483521 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 00:11:55.483530 kernel: fuse: init (API version 7.41) Oct 13 00:11:55.483540 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 00:11:55.483550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 00:11:55.483560 kernel: ACPI: bus type drm_connector registered Oct 13 00:11:55.483571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 00:11:55.483581 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
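The modprobe@configfs.service, modprobe@dm_mod.service and similar units being started above are all instances of one template unit, modprobe@.service; the text after the "@" is the instance name that selects which module to load. The sketch below maps an instantiated unit name to the modprobe invocation the template conceptually runs; the exact flags in the real unit's ExecStart are an assumption here.

def template_command(instance_unit):
    """Map 'modprobe@dm_mod.service' to the command its template expands to."""
    name, _, suffix = instance_unit.partition(".")
    template, _, instance = name.partition("@")
    if template != "modprobe" or suffix != "service" or not instance:
        raise ValueError(f"not a modprobe@ instance: {instance_unit}")
    # Assumed shape of the ExecStart line; systemd substitutes the instance
    # name for the %i specifier when it instantiates the template.
    return ["modprobe", "-ab", "--quiet", instance]

for unit in ("modprobe@configfs.service", "modprobe@dm_mod.service",
             "modprobe@drm.service", "modprobe@efi_pstore.service",
             "modprobe@fuse.service", "modprobe@loop.service"):
    print(unit, "->", " ".join(template_command(unit)))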
Oct 13 00:11:55.483593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 00:11:55.483603 systemd[1]: verity-setup.service: Deactivated successfully. Oct 13 00:11:55.483613 systemd[1]: Stopped verity-setup.service. Oct 13 00:11:55.483623 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 00:11:55.483632 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 00:11:55.483644 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 00:11:55.483656 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 00:11:55.483666 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 00:11:55.483706 systemd-journald[1157]: Collecting audit messages is disabled. Oct 13 00:11:55.483728 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 00:11:55.483739 systemd-journald[1157]: Journal started Oct 13 00:11:55.483759 systemd-journald[1157]: Runtime Journal (/run/log/journal/56eaa0b13cc648ed946ef3535421daa0) is 6M, max 48.5M, 42.4M free. Oct 13 00:11:55.286209 systemd[1]: Queued start job for default target multi-user.target. Oct 13 00:11:55.301258 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 00:11:55.301657 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 00:11:55.485787 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 00:11:55.487286 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 00:11:55.488068 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 00:11:55.489361 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 00:11:55.489526 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 00:11:55.490624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:11:55.490797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:11:55.491945 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 00:11:55.492120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 00:11:55.493203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:11:55.493403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:11:55.494589 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 00:11:55.494752 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 00:11:55.495964 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:11:55.496128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:11:55.497302 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 00:11:55.498388 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 00:11:55.499740 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 00:11:55.501034 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 00:11:55.512460 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 00:11:55.514512 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 00:11:55.516326 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Oct 13 00:11:55.517158 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 00:11:55.517189 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 00:11:55.518941 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 00:11:55.524180 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 00:11:55.525190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:11:55.526331 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 00:11:55.528112 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 00:11:55.529251 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 00:11:55.532369 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 00:11:55.533434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 00:11:55.534553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 00:11:55.538588 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 00:11:55.540261 systemd-journald[1157]: Time spent on flushing to /var/log/journal/56eaa0b13cc648ed946ef3535421daa0 is 20.138ms for 891 entries. Oct 13 00:11:55.540261 systemd-journald[1157]: System Journal (/var/log/journal/56eaa0b13cc648ed946ef3535421daa0) is 8M, max 195.6M, 187.6M free. Oct 13 00:11:55.572088 systemd-journald[1157]: Received client request to flush runtime journal. Oct 13 00:11:55.572143 kernel: loop0: detected capacity change from 0 to 207008 Oct 13 00:11:55.544450 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 00:11:55.549394 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 00:11:55.551194 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 00:11:55.553106 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 00:11:55.563981 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 00:11:55.565439 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 00:11:55.567950 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 00:11:55.570990 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:11:55.574452 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Oct 13 00:11:55.574490 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Oct 13 00:11:55.579636 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 00:11:55.581217 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 00:11:55.588736 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 00:11:55.589439 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 00:11:55.606204 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
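The journald flush report above, 20.138 ms spent flushing 891 entries to /var/log/journal, comes to roughly 23 microseconds per entry; a quick check of that figure:

# Figures reported by systemd-journald above.
flush_ms = 20.138
entries = 891

per_entry_us = flush_ms * 1000 / entries
print(f"{per_entry_us:.1f} us per journal entry")  # ~22.6 us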
Oct 13 00:11:55.623433 kernel: loop1: detected capacity change from 0 to 100632 Oct 13 00:11:55.628304 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 00:11:55.630416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 00:11:55.648856 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Oct 13 00:11:55.648877 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Oct 13 00:11:55.652979 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 00:11:55.668276 kernel: loop2: detected capacity change from 0 to 119368 Oct 13 00:11:55.707268 kernel: loop3: detected capacity change from 0 to 207008 Oct 13 00:11:55.715279 kernel: loop4: detected capacity change from 0 to 100632 Oct 13 00:11:55.720269 kernel: loop5: detected capacity change from 0 to 119368 Oct 13 00:11:55.723772 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 13 00:11:55.724150 (sd-merge)[1227]: Merged extensions into '/usr'. Oct 13 00:11:55.728131 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 00:11:55.728147 systemd[1]: Reloading... Oct 13 00:11:55.783391 zram_generator::config[1252]: No configuration found. Oct 13 00:11:55.810093 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 00:11:55.927278 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 00:11:55.927453 systemd[1]: Reloading finished in 198 ms. Oct 13 00:11:55.944044 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 00:11:55.945449 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 00:11:55.960552 systemd[1]: Starting ensure-sysext.service... Oct 13 00:11:55.962352 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 00:11:55.971305 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 00:11:55.974336 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Oct 13 00:11:55.974457 systemd[1]: Reloading... Oct 13 00:11:55.975848 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 00:11:55.975878 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 00:11:55.976085 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 00:11:55.976291 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 00:11:55.976895 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 00:11:55.977088 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Oct 13 00:11:55.977129 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Oct 13 00:11:55.979885 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 00:11:55.979900 systemd-tmpfiles[1289]: Skipping /boot Oct 13 00:11:55.985799 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. 
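The (sd-merge) lines above show systemd-sysext combining the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr; the kubernetes.raw link under /etc/extensions was written by Ignition earlier in this boot. The sketch below only enumerates candidate images in commonly used sysext search directories; the merge itself is an overlayfs mount performed by systemd-sysext, not by this code, and the precedence rule in the comment is a simplification.

from pathlib import Path

# Commonly used systemd-sysext search directories (the full search path in
# systemd is longer; this is an illustration).
SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"),
               Path("/var/lib/extensions")]

def candidate_extensions():
    """List extension images (raw files or directories) keyed by name."""
    names = {}
    for d in SEARCH_DIRS:
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            name = entry.name.removesuffix(".raw")
            names.setdefault(name, entry)  # earlier directory wins (simplified)
    return names

if __name__ == "__main__":
    for name, path in candidate_extensions().items():
        print(f"{name}: {path}")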
Oct 13 00:11:55.985815 systemd-tmpfiles[1289]: Skipping /boot Oct 13 00:11:56.015262 zram_generator::config[1316]: No configuration found. Oct 13 00:11:56.149536 systemd[1]: Reloading finished in 174 ms. Oct 13 00:11:56.170274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 00:11:56.186282 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 00:11:56.188484 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 00:11:56.190458 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 00:11:56.193398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 00:11:56.196814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 00:11:56.198927 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 00:11:56.203902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:11:56.205389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:11:56.215363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:11:56.217566 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:11:56.219376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:11:56.219504 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:11:56.220432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:11:56.220624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:11:56.226756 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:11:56.226942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:11:56.229014 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 00:11:56.230646 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:11:56.230802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:11:56.236777 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 00:11:56.241186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:11:56.242573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:11:56.245513 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 00:11:56.249253 augenrules[1386]: No rules Oct 13 00:11:56.250639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:11:56.253397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:11:56.254783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 13 00:11:56.254913 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:11:56.256547 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 00:11:56.256796 systemd-udevd[1361]: Using default interface naming scheme 'v255'. Oct 13 00:11:56.260720 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 00:11:56.263464 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:11:56.263853 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 00:11:56.266291 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 00:11:56.267896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:11:56.275552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:11:56.277330 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 00:11:56.277525 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 00:11:56.279361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:11:56.279591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:11:56.281384 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:11:56.281538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:11:56.282920 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 00:11:56.287727 systemd[1]: Finished ensure-sysext.service. Oct 13 00:11:56.288846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 00:11:56.296932 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 00:11:56.297805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 00:11:56.297875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 00:11:56.301496 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 00:11:56.302440 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 00:11:56.330962 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 00:11:56.367101 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 13 00:11:56.403047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 00:11:56.405871 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 00:11:56.436899 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 00:11:56.462182 systemd-networkd[1429]: lo: Link UP Oct 13 00:11:56.462523 systemd-networkd[1429]: lo: Gained carrier Oct 13 00:11:56.463405 systemd-networkd[1429]: Enumeration completed Oct 13 00:11:56.463574 systemd[1]: Started systemd-networkd.service - Network Configuration. 
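Unit names such as dev-disk-by\x2dlabel-OEM.device and run-credentials-systemd\x2dnetworkd.service.mount above are escaped forms of filesystem paths: "/" becomes "-" and a literal "-" becomes "\x2d", which is what systemd-escape --path produces. The sketch below goes in the unescaping direction, enough to recover /dev/disk/by-label/OEM; corner cases of the full escaping rules are left out.

import re

def unescape_device_unit(unit):
    """Turn an escaped .device unit name back into a path (simplified)."""
    name = unit.removesuffix(".device")
    # "-" separates path components; convert it first, since the \xNN
    # escape sequences themselves contain no "-".
    path = "/" + name.replace("-", "/")
    # Decode \xNN escapes such as \x2d back into their byte values.
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

print(unescape_device_unit(r"dev-disk-by\x2dlabel-OEM.device"))
# -> /dev/disk/by-label/OEM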
Oct 13 00:11:56.464007 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:11:56.464089 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 00:11:56.464697 systemd-networkd[1429]: eth0: Link UP Oct 13 00:11:56.464884 systemd-networkd[1429]: eth0: Gained carrier Oct 13 00:11:56.464959 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:11:56.467479 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 00:11:56.471342 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 00:11:56.480310 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 00:11:56.485035 systemd-resolved[1354]: Positive Trust Anchors: Oct 13 00:11:56.485054 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 00:11:56.485085 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 00:11:56.492324 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 00:11:56.492607 systemd-resolved[1354]: Defaulting to hostname 'linux'. Oct 13 00:11:56.496303 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 00:11:56.497303 systemd[1]: Reached target network.target - Network. Oct 13 00:11:56.499293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:11:56.512253 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 00:11:57.002213 systemd-resolved[1354]: Clock change detected. Flushing caches. Oct 13 00:11:57.002236 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 00:11:57.002277 systemd-timesyncd[1430]: Initial clock synchronization to Mon 2025-10-13 00:11:57.002162 UTC. Oct 13 00:11:57.002546 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 00:11:57.003822 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 00:11:57.004761 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 00:11:57.006043 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 00:11:57.007599 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 00:11:57.007637 systemd[1]: Reached target paths.target - Path Units. Oct 13 00:11:57.008277 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 00:11:57.009528 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
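eth0 above is matched by the catch-all /usr/lib/systemd/network/zz-default.network (hence the note about the potentially unpredictable interface name) and is then configured over DHCP, picking up 10.0.0.101/16 via 10.0.0.1. The sketch below writes a minimal catch-all .network file of that general shape; it does not reproduce Flatcar's shipped zz-default.network, and the output path and filename are examples only.

from pathlib import Path

# A minimal catch-all .network file: match any interface name and use DHCP.
CATCH_ALL_NETWORK = """\
[Match]
Name=*

[Network]
DHCP=yes
"""

def write_network_file(directory="/etc/systemd/network",
                       name="zz-example.network"):
    path = Path(directory) / name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CATCH_ALL_NETWORK)
    return path

if __name__ == "__main__":
    print(f"wrote {write_network_file()}")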
Oct 13 00:11:57.010791 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 00:11:57.012211 systemd[1]: Reached target timers.target - Timer Units. Oct 13 00:11:57.013747 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 00:11:57.016346 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 00:11:57.019706 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 00:11:57.020749 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 00:11:57.021648 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 00:11:57.026187 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 00:11:57.028265 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 00:11:57.029750 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 00:11:57.037084 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 00:11:57.037879 systemd[1]: Reached target basic.target - Basic System. Oct 13 00:11:57.038621 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:11:57.038650 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:11:57.039561 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 00:11:57.041253 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 00:11:57.042887 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 00:11:57.054200 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 00:11:57.055873 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 00:11:57.056621 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 00:11:57.057531 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 00:11:57.059567 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 00:11:57.061239 jq[1473]: false Oct 13 00:11:57.062723 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 00:11:57.065342 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 00:11:57.068239 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 00:11:57.069872 extend-filesystems[1474]: Found /dev/vda6 Oct 13 00:11:57.071368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 00:11:57.073588 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 00:11:57.075049 extend-filesystems[1474]: Found /dev/vda9 Oct 13 00:11:57.073959 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 00:11:57.074891 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 00:11:57.077583 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 00:11:57.079976 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
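The "Listening on ..." socket units above (dbus.socket, docker.socket, the sshd sockets) are socket activation: systemd binds the socket itself and starts the service only when needed, handing over file descriptors beginning at fd 3 and describing them through the LISTEN_PID and LISTEN_FDS environment variables. Below is a minimal Python sketch of the receiving side of that protocol; real daemons normally use sd_listen_fds() or a library binding rather than hand-rolling it.

import os
import socket

SD_LISTEN_FDS_START = 3  # first file descriptor passed by socket activation

def listen_fds():
    """Return sockets passed by systemd, following the LISTEN_FDS protocol."""
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []  # not activated for this process
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

if __name__ == "__main__":
    socks = listen_fds()
    if not socks:
        print("not socket-activated; nothing to do")
    else:
        conn, _peer = socks[0].accept()  # assumes a stream socket
        conn.sendall(b"hello from a socket-activated sketch\n")
        conn.close()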
Oct 13 00:11:57.080139 extend-filesystems[1474]: Checking size of /dev/vda9 Oct 13 00:11:57.081256 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 00:11:57.082248 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 00:11:57.084226 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 00:11:57.084378 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 00:11:57.089479 jq[1492]: true Oct 13 00:11:57.094988 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 00:11:57.095536 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 00:11:57.103353 extend-filesystems[1474]: Resized partition /dev/vda9 Oct 13 00:11:57.107234 extend-filesystems[1515]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 00:11:57.110477 tar[1498]: linux-arm64/LICENSE Oct 13 00:11:57.110477 tar[1498]: linux-arm64/helm Oct 13 00:11:57.114849 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 13 00:11:57.113528 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 00:11:57.115088 update_engine[1491]: I20251013 00:11:57.114676 1491 main.cc:92] Flatcar Update Engine starting Oct 13 00:11:57.117297 jq[1506]: true Oct 13 00:11:57.126334 dbus-daemon[1471]: [system] SELinux support is enabled Oct 13 00:11:57.126504 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 00:11:57.131444 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 00:11:57.131500 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 00:11:57.132514 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 00:11:57.132538 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 00:11:57.134353 systemd[1]: Started update-engine.service - Update Engine. Oct 13 00:11:57.136300 update_engine[1491]: I20251013 00:11:57.134722 1491 update_check_scheduler.cc:74] Next update check in 9m10s Oct 13 00:11:57.136697 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 00:11:57.154217 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (Power Button) Oct 13 00:11:57.154434 systemd-logind[1484]: New seat seat0. Oct 13 00:11:57.155066 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 00:11:57.157480 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 13 00:11:57.161557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:11:57.171507 extend-filesystems[1515]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 00:11:57.171507 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 00:11:57.171507 extend-filesystems[1515]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 13 00:11:57.174635 extend-filesystems[1474]: Resized filesystem in /dev/vda9 Oct 13 00:11:57.175641 systemd[1]: extend-filesystems.service: Deactivated successfully. 
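The extend-filesystems run above grows the root filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from about 2.1 GiB to about 7.1 GiB; a quick check of those figures:

BLOCK_SIZE = 4096   # "(4k) blocks" per the kernel and resize2fs output above
OLD_BLOCKS = 553472
NEW_BLOCKS = 1864699

def gib(blocks):
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
# -> before: 2.11 GiB, after: 7.11 GiB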
Oct 13 00:11:57.175844 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 00:11:57.185747 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Oct 13 00:11:57.187512 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 00:11:57.204045 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 00:11:57.208808 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 00:11:57.291934 containerd[1512]: time="2025-10-13T00:11:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 00:11:57.292758 containerd[1512]: time="2025-10-13T00:11:57.292723025Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 00:11:57.304218 containerd[1512]: time="2025-10-13T00:11:57.304158385Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.24µs" Oct 13 00:11:57.304218 containerd[1512]: time="2025-10-13T00:11:57.304203265Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 00:11:57.304324 containerd[1512]: time="2025-10-13T00:11:57.304272025Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 00:11:57.304712 containerd[1512]: time="2025-10-13T00:11:57.304637745Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 00:11:57.304761 containerd[1512]: time="2025-10-13T00:11:57.304707665Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 00:11:57.304796 containerd[1512]: time="2025-10-13T00:11:57.304781545Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 00:11:57.305035 containerd[1512]: time="2025-10-13T00:11:57.304997305Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 00:11:57.305035 containerd[1512]: time="2025-10-13T00:11:57.305025785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 00:11:57.305790 containerd[1512]: time="2025-10-13T00:11:57.305709585Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 00:11:57.305840 containerd[1512]: time="2025-10-13T00:11:57.305781465Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 00:11:57.305864 containerd[1512]: time="2025-10-13T00:11:57.305845785Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 00:11:57.305864 containerd[1512]: time="2025-10-13T00:11:57.305858745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 00:11:57.306162 containerd[1512]: time="2025-10-13T00:11:57.306128185Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 00:11:57.306591 containerd[1512]: time="2025-10-13T00:11:57.306532625Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 00:11:57.306677 containerd[1512]: time="2025-10-13T00:11:57.306614865Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 00:11:57.306731 containerd[1512]: time="2025-10-13T00:11:57.306678745Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 00:11:57.306860 containerd[1512]: time="2025-10-13T00:11:57.306836865Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 00:11:57.307344 containerd[1512]: time="2025-10-13T00:11:57.307322865Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 00:11:57.307481 containerd[1512]: time="2025-10-13T00:11:57.307447465Z" level=info msg="metadata content store policy set" policy=shared Oct 13 00:11:57.311128 containerd[1512]: time="2025-10-13T00:11:57.311077905Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 00:11:57.311253 containerd[1512]: time="2025-10-13T00:11:57.311223465Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 00:11:57.311278 containerd[1512]: time="2025-10-13T00:11:57.311251025Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 00:11:57.311421 containerd[1512]: time="2025-10-13T00:11:57.311390985Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 00:11:57.311478 containerd[1512]: time="2025-10-13T00:11:57.311428345Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 00:11:57.311553 containerd[1512]: time="2025-10-13T00:11:57.311446945Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 00:11:57.311574 containerd[1512]: time="2025-10-13T00:11:57.311559905Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 00:11:57.311632 containerd[1512]: time="2025-10-13T00:11:57.311616865Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 00:11:57.311690 containerd[1512]: time="2025-10-13T00:11:57.311641185Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 00:11:57.311710 containerd[1512]: time="2025-10-13T00:11:57.311697145Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 00:11:57.311734 containerd[1512]: time="2025-10-13T00:11:57.311710825Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 00:11:57.311734 containerd[1512]: time="2025-10-13T00:11:57.311725865Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 00:11:57.312079 containerd[1512]: time="2025-10-13T00:11:57.312043185Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 00:11:57.312172 containerd[1512]: time="2025-10-13T00:11:57.312155545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 00:11:57.312199 containerd[1512]: time="2025-10-13T00:11:57.312185425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 00:11:57.312217 containerd[1512]: time="2025-10-13T00:11:57.312199705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 00:11:57.312271 containerd[1512]: time="2025-10-13T00:11:57.312211625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 00:11:57.312297 containerd[1512]: time="2025-10-13T00:11:57.312278145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 00:11:57.312297 containerd[1512]: time="2025-10-13T00:11:57.312293865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 00:11:57.312334 containerd[1512]: time="2025-10-13T00:11:57.312304505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 00:11:57.312334 containerd[1512]: time="2025-10-13T00:11:57.312316945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 00:11:57.312334 containerd[1512]: time="2025-10-13T00:11:57.312331585Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 00:11:57.312391 containerd[1512]: time="2025-10-13T00:11:57.312342545Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 00:11:57.312784 containerd[1512]: time="2025-10-13T00:11:57.312750985Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 00:11:57.312784 containerd[1512]: time="2025-10-13T00:11:57.312781185Z" level=info msg="Start snapshots syncer" Oct 13 00:11:57.312932 containerd[1512]: time="2025-10-13T00:11:57.312902025Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 00:11:57.314481 containerd[1512]: time="2025-10-13T00:11:57.314251105Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 00:11:57.314481 containerd[1512]: time="2025-10-13T00:11:57.314336105Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 00:11:57.314645 containerd[1512]: time="2025-10-13T00:11:57.314434825Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 00:11:57.314938 containerd[1512]: time="2025-10-13T00:11:57.314890585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 00:11:57.315023 containerd[1512]: time="2025-10-13T00:11:57.315008345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 00:11:57.315076 containerd[1512]: time="2025-10-13T00:11:57.315060945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 00:11:57.315158 containerd[1512]: time="2025-10-13T00:11:57.315130225Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 00:11:57.315217 containerd[1512]: time="2025-10-13T00:11:57.315203385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 00:11:57.315272 containerd[1512]: time="2025-10-13T00:11:57.315257705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 00:11:57.315326 containerd[1512]: time="2025-10-13T00:11:57.315310825Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 00:11:57.315406 containerd[1512]: time="2025-10-13T00:11:57.315389945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 00:11:57.315484 containerd[1512]: 
time="2025-10-13T00:11:57.315447065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 00:11:57.315580 containerd[1512]: time="2025-10-13T00:11:57.315563785Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 00:11:57.315725 containerd[1512]: time="2025-10-13T00:11:57.315707385Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 00:11:57.315789 containerd[1512]: time="2025-10-13T00:11:57.315773665Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 00:11:57.315834 containerd[1512]: time="2025-10-13T00:11:57.315821905Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 00:11:57.315883 containerd[1512]: time="2025-10-13T00:11:57.315869825Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 00:11:57.315941 containerd[1512]: time="2025-10-13T00:11:57.315928945Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 00:11:57.315990 containerd[1512]: time="2025-10-13T00:11:57.315977785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 00:11:57.316047 containerd[1512]: time="2025-10-13T00:11:57.316034305Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 00:11:57.316163 containerd[1512]: time="2025-10-13T00:11:57.316152305Z" level=info msg="runtime interface created" Oct 13 00:11:57.316205 containerd[1512]: time="2025-10-13T00:11:57.316193665Z" level=info msg="created NRI interface" Oct 13 00:11:57.316260 containerd[1512]: time="2025-10-13T00:11:57.316246745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 00:11:57.316313 containerd[1512]: time="2025-10-13T00:11:57.316301385Z" level=info msg="Connect containerd service" Oct 13 00:11:57.316389 containerd[1512]: time="2025-10-13T00:11:57.316375745Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 00:11:57.317249 containerd[1512]: time="2025-10-13T00:11:57.317216265Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:11:57.386358 containerd[1512]: time="2025-10-13T00:11:57.386298745Z" level=info msg="Start subscribing containerd event" Oct 13 00:11:57.386986 containerd[1512]: time="2025-10-13T00:11:57.386525905Z" level=info msg="Start recovering state" Oct 13 00:11:57.386986 containerd[1512]: time="2025-10-13T00:11:57.386620505Z" level=info msg="Start event monitor" Oct 13 00:11:57.386986 containerd[1512]: time="2025-10-13T00:11:57.386640185Z" level=info msg="Start cni network conf syncer for default" Oct 13 00:11:57.386986 containerd[1512]: time="2025-10-13T00:11:57.386651065Z" level=info msg="Start streaming server" Oct 13 00:11:57.386986 containerd[1512]: time="2025-10-13T00:11:57.386659905Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 00:11:57.386986 containerd[1512]: 
time="2025-10-13T00:11:57.386667785Z" level=info msg="runtime interface starting up..." Oct 13 00:11:57.386986 containerd[1512]: time="2025-10-13T00:11:57.386673625Z" level=info msg="starting plugins..." Oct 13 00:11:57.386986 containerd[1512]: time="2025-10-13T00:11:57.386686585Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 00:11:57.387742 containerd[1512]: time="2025-10-13T00:11:57.387676345Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 00:11:57.388128 containerd[1512]: time="2025-10-13T00:11:57.387976665Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 00:11:57.388343 containerd[1512]: time="2025-10-13T00:11:57.388326025Z" level=info msg="containerd successfully booted in 0.096742s" Oct 13 00:11:57.388363 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 00:11:57.460848 tar[1498]: linux-arm64/README.md Oct 13 00:11:57.480757 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 00:11:57.830515 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 00:11:57.851549 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 00:11:57.854055 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 00:11:57.869174 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 00:11:57.869426 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 00:11:57.872854 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 00:11:57.898609 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 00:11:57.901079 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 00:11:57.904026 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 13 00:11:57.905375 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 00:11:58.757620 systemd-networkd[1429]: eth0: Gained IPv6LL Oct 13 00:11:58.760089 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 00:11:58.762887 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 00:11:58.765065 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 00:11:58.767272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:11:58.769374 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 00:11:58.788605 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 00:11:58.788896 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 00:11:58.791554 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 00:11:58.793641 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 00:11:59.326147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:11:59.327575 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 00:11:59.329654 systemd[1]: Startup finished in 2.002s (kernel) + 5.310s (initrd) + 3.978s (userspace) = 11.291s. 
Oct 13 00:11:59.330279 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:11:59.670980 kubelet[1610]: E1013 00:11:59.670851 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:11:59.673512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:11:59.673753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:11:59.674061 systemd[1]: kubelet.service: Consumed 748ms CPU time, 257.3M memory peak. Oct 13 00:12:02.620223 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 00:12:02.621336 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:47186.service - OpenSSH per-connection server daemon (10.0.0.1:47186). Oct 13 00:12:02.705558 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 47186 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:12:02.707351 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:12:02.713439 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 00:12:02.714379 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 00:12:02.719667 systemd-logind[1484]: New session 1 of user core. Oct 13 00:12:02.737523 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 00:12:02.742102 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 00:12:02.761521 (systemd)[1629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 00:12:02.763912 systemd-logind[1484]: New session c1 of user core. Oct 13 00:12:02.872235 systemd[1629]: Queued start job for default target default.target. Oct 13 00:12:02.888500 systemd[1629]: Created slice app.slice - User Application Slice. Oct 13 00:12:02.888528 systemd[1629]: Reached target paths.target - Paths. Oct 13 00:12:02.888570 systemd[1629]: Reached target timers.target - Timers. Oct 13 00:12:02.889850 systemd[1629]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 00:12:02.900118 systemd[1629]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 00:12:02.900192 systemd[1629]: Reached target sockets.target - Sockets. Oct 13 00:12:02.900237 systemd[1629]: Reached target basic.target - Basic System. Oct 13 00:12:02.900269 systemd[1629]: Reached target default.target - Main User Target. Oct 13 00:12:02.900295 systemd[1629]: Startup finished in 130ms. Oct 13 00:12:02.900561 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 00:12:02.902136 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 00:12:02.969193 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:47244.service - OpenSSH per-connection server daemon (10.0.0.1:47244). Oct 13 00:12:03.017061 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 47244 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:12:03.018320 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:12:03.022053 systemd-logind[1484]: New session 2 of user core. 
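The kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml; on a node at this stage the unit keeps failing until that file is written (typically by kubeadm, though the log has not reached that point). A minimal Python sketch, not the kubelet's own check, that tests for the same file:

    from pathlib import Path

    # Path copied from the error logged above; its absence is why kubelet exited 1.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file():
        print(f"{cfg} present ({cfg.stat().st_size} bytes)")
    else:
        print(f"{cfg} missing; kubelet will keep exiting until it is written")
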
Oct 13 00:12:03.036675 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 00:12:03.087317 sshd[1643]: Connection closed by 10.0.0.1 port 47244 Oct 13 00:12:03.087201 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Oct 13 00:12:03.101239 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:47244.service: Deactivated successfully. Oct 13 00:12:03.103609 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 00:12:03.104211 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit. Oct 13 00:12:03.106161 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:47256.service - OpenSSH per-connection server daemon (10.0.0.1:47256). Oct 13 00:12:03.106600 systemd-logind[1484]: Removed session 2. Oct 13 00:12:03.154904 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 47256 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:12:03.156640 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:12:03.160338 systemd-logind[1484]: New session 3 of user core. Oct 13 00:12:03.169579 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 00:12:03.216241 sshd[1652]: Connection closed by 10.0.0.1 port 47256 Oct 13 00:12:03.216690 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Oct 13 00:12:03.228446 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:47256.service: Deactivated successfully. Oct 13 00:12:03.230812 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 00:12:03.231508 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit. Oct 13 00:12:03.233731 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:47268.service - OpenSSH per-connection server daemon (10.0.0.1:47268). Oct 13 00:12:03.234783 systemd-logind[1484]: Removed session 3. Oct 13 00:12:03.289008 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 47268 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:12:03.290276 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:12:03.294181 systemd-logind[1484]: New session 4 of user core. Oct 13 00:12:03.315668 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 00:12:03.368116 sshd[1661]: Connection closed by 10.0.0.1 port 47268 Oct 13 00:12:03.368512 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Oct 13 00:12:03.378344 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:47268.service: Deactivated successfully. Oct 13 00:12:03.379912 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 00:12:03.382675 systemd-logind[1484]: Session 4 logged out. Waiting for processes to exit. Oct 13 00:12:03.384858 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:47278.service - OpenSSH per-connection server daemon (10.0.0.1:47278). Oct 13 00:12:03.385519 systemd-logind[1484]: Removed session 4. Oct 13 00:12:03.441736 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 47278 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:12:03.443615 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:12:03.447496 systemd-logind[1484]: New session 5 of user core. Oct 13 00:12:03.455627 systemd[1]: Started session-5.scope - Session 5 of User core. 
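Sessions 2 through 4 above each open and close within a fraction of a second, which looks like scripted access rather than interactive use. A small illustrative sketch that derives one such lifetime from the two systemd-logind timestamps for session 2:

    from datetime import datetime

    # Timestamps copied from the logind lines above (the missing year is irrelevant here).
    fmt = "%b %d %H:%M:%S.%f"
    opened = datetime.strptime("Oct 13 00:12:03.022053", fmt)
    removed = datetime.strptime("Oct 13 00:12:03.106600", fmt)
    print(f"session 2 lasted {(removed - opened).total_seconds():.3f}s")
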
Oct 13 00:12:03.511115 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 00:12:03.511373 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:12:03.522270 sudo[1671]: pam_unix(sudo:session): session closed for user root Oct 13 00:12:03.523687 sshd[1670]: Connection closed by 10.0.0.1 port 47278 Oct 13 00:12:03.524064 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Oct 13 00:12:03.537368 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:47278.service: Deactivated successfully. Oct 13 00:12:03.538711 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 00:12:03.539309 systemd-logind[1484]: Session 5 logged out. Waiting for processes to exit. Oct 13 00:12:03.541627 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:47290.service - OpenSSH per-connection server daemon (10.0.0.1:47290). Oct 13 00:12:03.542157 systemd-logind[1484]: Removed session 5. Oct 13 00:12:03.591260 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 47290 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:12:03.592605 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:12:03.597240 systemd-logind[1484]: New session 6 of user core. Oct 13 00:12:03.606649 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 00:12:03.657255 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 00:12:03.657833 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:12:03.662791 sudo[1682]: pam_unix(sudo:session): session closed for user root Oct 13 00:12:03.667090 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 00:12:03.667576 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:12:03.676485 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 00:12:03.714540 augenrules[1704]: No rules Oct 13 00:12:03.715592 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:12:03.717505 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 00:12:03.718270 sudo[1681]: pam_unix(sudo:session): session closed for user root Oct 13 00:12:03.719501 sshd[1680]: Connection closed by 10.0.0.1 port 47290 Oct 13 00:12:03.719777 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Oct 13 00:12:03.728187 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:47290.service: Deactivated successfully. Oct 13 00:12:03.729937 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 00:12:03.730578 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit. Oct 13 00:12:03.733274 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:47296.service - OpenSSH per-connection server daemon (10.0.0.1:47296). Oct 13 00:12:03.733922 systemd-logind[1484]: Removed session 6. Oct 13 00:12:03.784651 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 47296 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:12:03.785915 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:12:03.789617 systemd-logind[1484]: New session 7 of user core. Oct 13 00:12:03.801667 systemd[1]: Started session-7.scope - Session 7 of User core. 
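The sudo records above capture the provisioning steps run as the core user (setenforce, pruning the default audit rules, restarting audit-rules). A throwaway sketch for pulling the COMMAND= field out of records in this shape:

    import re

    # Two records copied from the sudo lines above.
    sample = [
        "sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1",
        "sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules",
    ]
    for line in sample:
        if m := re.search(r"COMMAND=(.+)$", line):
            print(m.group(1))
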
Oct 13 00:12:03.850995 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 00:12:03.851248 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:12:04.117287 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 00:12:04.127841 (dockerd)[1739]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 00:12:04.319605 dockerd[1739]: time="2025-10-13T00:12:04.319534505Z" level=info msg="Starting up" Oct 13 00:12:04.320314 dockerd[1739]: time="2025-10-13T00:12:04.320289985Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 00:12:04.330539 dockerd[1739]: time="2025-10-13T00:12:04.330506745Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 00:12:04.436132 dockerd[1739]: time="2025-10-13T00:12:04.436041825Z" level=info msg="Loading containers: start." Oct 13 00:12:04.444481 kernel: Initializing XFRM netlink socket Oct 13 00:12:04.624501 systemd-networkd[1429]: docker0: Link UP Oct 13 00:12:04.627598 dockerd[1739]: time="2025-10-13T00:12:04.627432985Z" level=info msg="Loading containers: done." Oct 13 00:12:04.638955 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3913417730-merged.mount: Deactivated successfully. Oct 13 00:12:04.641816 dockerd[1739]: time="2025-10-13T00:12:04.641765865Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 00:12:04.641898 dockerd[1739]: time="2025-10-13T00:12:04.641845665Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 00:12:04.641955 dockerd[1739]: time="2025-10-13T00:12:04.641936825Z" level=info msg="Initializing buildkit" Oct 13 00:12:04.661690 dockerd[1739]: time="2025-10-13T00:12:04.661645385Z" level=info msg="Completed buildkit initialization" Oct 13 00:12:04.668434 dockerd[1739]: time="2025-10-13T00:12:04.667974145Z" level=info msg="Daemon has completed initialization" Oct 13 00:12:04.668434 dockerd[1739]: time="2025-10-13T00:12:04.668036625Z" level=info msg="API listen on /run/docker.sock" Oct 13 00:12:04.668160 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 00:12:05.212562 containerd[1512]: time="2025-10-13T00:12:05.212511305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 13 00:12:05.806677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195043309.mount: Deactivated successfully. 
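dockerd above reports "API listen on /run/docker.sock". A minimal sketch, assuming read/write access to that socket, that issues a plain HTTP request to the version endpoint over it with only the standard library (the exact reply depends on the daemon build, 28.0.4 here):

    import socket

    # Talk to the UNIX socket the daemon announced above; requires permission
    # on /run/docker.sock (root or docker group membership).
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk
    print(reply.decode(errors="replace"))
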
Oct 13 00:12:06.723604 containerd[1512]: time="2025-10-13T00:12:06.722561145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:06.723604 containerd[1512]: time="2025-10-13T00:12:06.723398345Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Oct 13 00:12:06.724192 containerd[1512]: time="2025-10-13T00:12:06.724162585Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:06.727019 containerd[1512]: time="2025-10-13T00:12:06.726994585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:06.727983 containerd[1512]: time="2025-10-13T00:12:06.727943225Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.51539076s" Oct 13 00:12:06.727983 containerd[1512]: time="2025-10-13T00:12:06.727980905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 13 00:12:06.728729 containerd[1512]: time="2025-10-13T00:12:06.728699225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 13 00:12:07.783894 containerd[1512]: time="2025-10-13T00:12:07.783826225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:07.784482 containerd[1512]: time="2025-10-13T00:12:07.784424585Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Oct 13 00:12:07.785337 containerd[1512]: time="2025-10-13T00:12:07.785313465Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:07.788348 containerd[1512]: time="2025-10-13T00:12:07.788299505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:07.790109 containerd[1512]: time="2025-10-13T00:12:07.790034185Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.06129876s" Oct 13 00:12:07.790109 containerd[1512]: time="2025-10-13T00:12:07.790075185Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 13 00:12:07.790628 
containerd[1512]: time="2025-10-13T00:12:07.790600825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 13 00:12:08.873005 containerd[1512]: time="2025-10-13T00:12:08.872958185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:08.873926 containerd[1512]: time="2025-10-13T00:12:08.873537545Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Oct 13 00:12:08.874682 containerd[1512]: time="2025-10-13T00:12:08.874646985Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:08.877687 containerd[1512]: time="2025-10-13T00:12:08.877660105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:08.878597 containerd[1512]: time="2025-10-13T00:12:08.878564105Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.08793116s" Oct 13 00:12:08.878597 containerd[1512]: time="2025-10-13T00:12:08.878597065Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 13 00:12:08.879015 containerd[1512]: time="2025-10-13T00:12:08.878990425Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 13 00:12:09.779537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007336567.mount: Deactivated successfully. Oct 13 00:12:09.780526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 00:12:09.782627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:12:09.907568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:12:09.924744 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:12:09.961019 kubelet[2041]: E1013 00:12:09.960957 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:12:09.964205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:12:09.964334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:12:09.965558 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.6M memory peak. 
Oct 13 00:12:10.137610 containerd[1512]: time="2025-10-13T00:12:10.137476185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:10.138136 containerd[1512]: time="2025-10-13T00:12:10.138086425Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Oct 13 00:12:10.139075 containerd[1512]: time="2025-10-13T00:12:10.139041505Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:10.140723 containerd[1512]: time="2025-10-13T00:12:10.140690705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:10.141666 containerd[1512]: time="2025-10-13T00:12:10.141519465Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.26249852s" Oct 13 00:12:10.141666 containerd[1512]: time="2025-10-13T00:12:10.141556745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 13 00:12:10.142083 containerd[1512]: time="2025-10-13T00:12:10.142044625Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 13 00:12:10.660724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298935764.mount: Deactivated successfully. 
Oct 13 00:12:11.482576 containerd[1512]: time="2025-10-13T00:12:11.482512785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:11.483054 containerd[1512]: time="2025-10-13T00:12:11.483003545Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Oct 13 00:12:11.484639 containerd[1512]: time="2025-10-13T00:12:11.484584465Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:11.517482 containerd[1512]: time="2025-10-13T00:12:11.517402545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:11.518488 containerd[1512]: time="2025-10-13T00:12:11.518460425Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3762712s" Oct 13 00:12:11.518539 containerd[1512]: time="2025-10-13T00:12:11.518494945Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 13 00:12:11.518922 containerd[1512]: time="2025-10-13T00:12:11.518898065Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 00:12:11.948623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919210808.mount: Deactivated successfully. 
Oct 13 00:12:11.954299 containerd[1512]: time="2025-10-13T00:12:11.953664185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:12:11.954299 containerd[1512]: time="2025-10-13T00:12:11.954272185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 13 00:12:11.954850 containerd[1512]: time="2025-10-13T00:12:11.954825985Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:12:11.957293 containerd[1512]: time="2025-10-13T00:12:11.957260665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:12:11.957919 containerd[1512]: time="2025-10-13T00:12:11.957896345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 438.96768ms" Oct 13 00:12:11.957974 containerd[1512]: time="2025-10-13T00:12:11.957927145Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 13 00:12:11.958481 containerd[1512]: time="2025-10-13T00:12:11.958345185Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 13 00:12:12.498318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount309638534.mount: Deactivated successfully. 
Oct 13 00:12:14.113708 containerd[1512]: time="2025-10-13T00:12:14.113640865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:14.114276 containerd[1512]: time="2025-10-13T00:12:14.114230505Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Oct 13 00:12:14.115150 containerd[1512]: time="2025-10-13T00:12:14.115113785Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:14.118386 containerd[1512]: time="2025-10-13T00:12:14.118352905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:14.119518 containerd[1512]: time="2025-10-13T00:12:14.119481665Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.1611096s" Oct 13 00:12:14.119574 containerd[1512]: time="2025-10-13T00:12:14.119517745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 13 00:12:18.538237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:12:18.538766 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.6M memory peak. Oct 13 00:12:18.540499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:12:18.560599 systemd[1]: Reload requested from client PID 2188 ('systemctl') (unit session-7.scope)... Oct 13 00:12:18.560734 systemd[1]: Reloading... Oct 13 00:12:18.627509 zram_generator::config[2232]: No configuration found. Oct 13 00:12:18.787098 systemd[1]: Reloading finished in 226 ms. Oct 13 00:12:18.842427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:12:18.844106 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:12:18.846387 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 00:12:18.846733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:12:18.848548 systemd[1]: kubelet.service: Consumed 94ms CPU time, 95.2M memory peak. Oct 13 00:12:18.850005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:12:18.986350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:12:18.990793 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 00:12:19.023605 kubelet[2279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 00:12:19.023605 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
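Across the seven images pulled above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) the logged sizes and durations add up to roughly 182 MB fetched over about 8.9s of pull time; wall-clock time is longer because the pulls run one after another, with a kubelet restart in between. A quick sketch summing the logged values:

    # (size_bytes, seconds) pairs copied from the "Pulled image ..." lines above.
    pulls = {
        "kube-apiserver:v1.32.9":          (26_360_284, 1.51539076),
        "kube-controller-manager:v1.32.9": (24_099_975, 1.06129876),
        "kube-scheduler:v1.32.9":          (19_053_117, 1.08793116),
        "kube-proxy:v1.32.9":              (27_416_836, 1.26249852),
        "coredns:v1.11.3":                 (16_948_420, 1.3762712),
        "pause:3.10":                      (267_933,    0.43896768),
        "etcd:3.5.16-0":                   (67_941_650, 2.1611096),
    }
    total_bytes = sum(b for b, _ in pulls.values())
    total_secs = sum(t for _, t in pulls.values())
    print(f"{total_bytes / 1e6:.0f} MB in {total_secs:.1f}s of pulling")
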
Oct 13 00:12:19.023605 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 00:12:19.023931 kubelet[2279]: I1013 00:12:19.023660 2279 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 00:12:19.949309 kubelet[2279]: I1013 00:12:19.949254 2279 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 00:12:19.949309 kubelet[2279]: I1013 00:12:19.949290 2279 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 00:12:19.949606 kubelet[2279]: I1013 00:12:19.949579 2279 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 00:12:19.970812 kubelet[2279]: E1013 00:12:19.970764 2279 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:12:19.972008 kubelet[2279]: I1013 00:12:19.971985 2279 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 00:12:19.977507 kubelet[2279]: I1013 00:12:19.977486 2279 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 00:12:19.980069 kubelet[2279]: I1013 00:12:19.980052 2279 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 00:12:19.980705 kubelet[2279]: I1013 00:12:19.980659 2279 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 00:12:19.980861 kubelet[2279]: I1013 00:12:19.980697 2279 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 00:12:19.980952 kubelet[2279]: I1013 00:12:19.980928 2279 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 00:12:19.980952 kubelet[2279]: I1013 00:12:19.980938 2279 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 00:12:19.981239 kubelet[2279]: I1013 00:12:19.981222 2279 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:12:19.983593 kubelet[2279]: I1013 00:12:19.983576 2279 kubelet.go:446] "Attempting to sync node with API server" Oct 13 00:12:19.983652 kubelet[2279]: I1013 00:12:19.983598 2279 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 00:12:19.983652 kubelet[2279]: I1013 00:12:19.983621 2279 kubelet.go:352] "Adding apiserver pod source" Oct 13 00:12:19.983652 kubelet[2279]: I1013 00:12:19.983631 2279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 00:12:19.985742 kubelet[2279]: I1013 00:12:19.985696 2279 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 00:12:19.986463 kubelet[2279]: I1013 00:12:19.986246 2279 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 00:12:19.986463 kubelet[2279]: W1013 00:12:19.986407 2279 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 13 00:12:19.986974 kubelet[2279]: W1013 00:12:19.986931 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Oct 13 00:12:19.987530 kubelet[2279]: I1013 00:12:19.987208 2279 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 00:12:19.987594 kubelet[2279]: I1013 00:12:19.987556 2279 server.go:1287] "Started kubelet" Oct 13 00:12:19.987637 kubelet[2279]: E1013 00:12:19.987513 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:12:19.987683 kubelet[2279]: W1013 00:12:19.987393 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Oct 13 00:12:19.987750 kubelet[2279]: E1013 00:12:19.987738 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:12:19.987936 kubelet[2279]: I1013 00:12:19.987910 2279 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 00:12:19.992634 kubelet[2279]: I1013 00:12:19.992297 2279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 00:12:19.993078 kubelet[2279]: I1013 00:12:19.993053 2279 server.go:479] "Adding debug handlers to kubelet server" Oct 13 00:12:19.993617 kubelet[2279]: E1013 00:12:19.993374 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186de490cffb92a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 00:12:19.987526305 +0000 UTC m=+0.993130041,LastTimestamp:2025-10-13 00:12:19.987526305 +0000 UTC m=+0.993130041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 00:12:19.994420 kubelet[2279]: I1013 00:12:19.994400 2279 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 00:12:19.995367 kubelet[2279]: I1013 00:12:19.994719 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 00:12:19.995879 kubelet[2279]: I1013 00:12:19.994878 2279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 00:12:19.996211 kubelet[2279]: I1013 00:12:19.996192 2279 
volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 00:12:19.996742 kubelet[2279]: E1013 00:12:19.996716 2279 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 00:12:19.996922 kubelet[2279]: I1013 00:12:19.996898 2279 factory.go:221] Registration of the systemd container factory successfully Oct 13 00:12:19.997026 kubelet[2279]: I1013 00:12:19.997014 2279 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 00:12:19.997375 kubelet[2279]: I1013 00:12:19.997275 2279 reconciler.go:26] "Reconciler: start to sync state" Oct 13 00:12:19.997375 kubelet[2279]: I1013 00:12:19.997281 2279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 00:12:19.997783 kubelet[2279]: W1013 00:12:19.997739 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Oct 13 00:12:19.997849 kubelet[2279]: E1013 00:12:19.997789 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:12:19.997849 kubelet[2279]: E1013 00:12:19.997814 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:12:19.997983 kubelet[2279]: E1013 00:12:19.997961 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="200ms" Oct 13 00:12:19.998366 kubelet[2279]: I1013 00:12:19.998346 2279 factory.go:221] Registration of the containerd container factory successfully Oct 13 00:12:20.007388 kubelet[2279]: I1013 00:12:20.007370 2279 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 00:12:20.007388 kubelet[2279]: I1013 00:12:20.007385 2279 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 00:12:20.007511 kubelet[2279]: I1013 00:12:20.007401 2279 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:12:20.098012 kubelet[2279]: E1013 00:12:20.097979 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:12:20.133211 kubelet[2279]: I1013 00:12:20.133172 2279 policy_none.go:49] "None policy: Start" Oct 13 00:12:20.133211 kubelet[2279]: I1013 00:12:20.133198 2279 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 00:12:20.133211 kubelet[2279]: I1013 00:12:20.133211 2279 state_mem.go:35] "Initializing new in-memory state store" Oct 13 00:12:20.134491 kubelet[2279]: I1013 00:12:20.134441 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 00:12:20.135886 kubelet[2279]: I1013 00:12:20.135796 2279 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 13 00:12:20.135886 kubelet[2279]: I1013 00:12:20.135832 2279 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 00:12:20.135886 kubelet[2279]: I1013 00:12:20.135857 2279 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 00:12:20.135886 kubelet[2279]: I1013 00:12:20.135864 2279 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 00:12:20.136009 kubelet[2279]: E1013 00:12:20.135904 2279 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 00:12:20.137240 kubelet[2279]: W1013 00:12:20.137168 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Oct 13 00:12:20.137240 kubelet[2279]: E1013 00:12:20.137223 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:12:20.139208 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 00:12:20.151916 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 00:12:20.154949 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 00:12:20.174205 kubelet[2279]: I1013 00:12:20.174167 2279 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 00:12:20.174763 kubelet[2279]: I1013 00:12:20.174336 2279 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 00:12:20.174763 kubelet[2279]: I1013 00:12:20.174354 2279 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 00:12:20.174763 kubelet[2279]: I1013 00:12:20.174518 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 00:12:20.175624 kubelet[2279]: E1013 00:12:20.175604 2279 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 00:12:20.175671 kubelet[2279]: E1013 00:12:20.175647 2279 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 00:12:20.199109 kubelet[2279]: E1013 00:12:20.199073 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="400ms" Oct 13 00:12:20.243066 systemd[1]: Created slice kubepods-burstable-pod9bbf3a683d7e5def769dd59ac449f046.slice - libcontainer container kubepods-burstable-pod9bbf3a683d7e5def769dd59ac449f046.slice. 
Oct 13 00:12:20.253218 kubelet[2279]: E1013 00:12:20.253182 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:12:20.255002 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 13 00:12:20.256395 kubelet[2279]: E1013 00:12:20.256349 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:12:20.276605 kubelet[2279]: I1013 00:12:20.276483 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:12:20.276968 kubelet[2279]: E1013 00:12:20.276933 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Oct 13 00:12:20.278047 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Oct 13 00:12:20.279614 kubelet[2279]: E1013 00:12:20.279593 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:12:20.299974 kubelet[2279]: I1013 00:12:20.299941 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:20.299974 kubelet[2279]: I1013 00:12:20.299969 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:20.300063 kubelet[2279]: I1013 00:12:20.299990 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:20.300063 kubelet[2279]: I1013 00:12:20.300007 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 00:12:20.300063 kubelet[2279]: I1013 00:12:20.300023 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bbf3a683d7e5def769dd59ac449f046-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9bbf3a683d7e5def769dd59ac449f046\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:20.300063 kubelet[2279]: I1013 00:12:20.300039 2279 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bbf3a683d7e5def769dd59ac449f046-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9bbf3a683d7e5def769dd59ac449f046\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:20.300063 kubelet[2279]: I1013 00:12:20.300052 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:20.300160 kubelet[2279]: I1013 00:12:20.300066 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bbf3a683d7e5def769dd59ac449f046-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9bbf3a683d7e5def769dd59ac449f046\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:20.300160 kubelet[2279]: I1013 00:12:20.300080 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:20.478445 kubelet[2279]: I1013 00:12:20.478414 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:12:20.478754 kubelet[2279]: E1013 00:12:20.478714 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Oct 13 00:12:20.554218 containerd[1512]: time="2025-10-13T00:12:20.554121665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9bbf3a683d7e5def769dd59ac449f046,Namespace:kube-system,Attempt:0,}" Oct 13 00:12:20.557742 containerd[1512]: time="2025-10-13T00:12:20.557711745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 13 00:12:20.579137 containerd[1512]: time="2025-10-13T00:12:20.579064425Z" level=info msg="connecting to shim ba0cbe8ba179fd8ed1e9a20f8345c3ad540ea25aaf8bed3f03a6d97b9a5a18b6" address="unix:///run/containerd/s/b55b854864b63d7ff8267804806a4e0d87e11c7957d0e5868c6b179b082b995f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:12:20.581288 containerd[1512]: time="2025-10-13T00:12:20.581259545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 13 00:12:20.583073 containerd[1512]: time="2025-10-13T00:12:20.582919505Z" level=info msg="connecting to shim 7cdb9ffb7ce2b296f28925b6aeb720e356c495827b129348180639d262cd8395" address="unix:///run/containerd/s/01041db22fc366f913888559665bad953c8c7e8af0c608d6b9a34fd39097a5df" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:12:20.600471 kubelet[2279]: E1013 00:12:20.600289 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection 
refused" interval="800ms" Oct 13 00:12:20.604107 containerd[1512]: time="2025-10-13T00:12:20.604006425Z" level=info msg="connecting to shim 62fc88b0ea72156f1c16b07fec3c10fdf180fed05a1b8940957cc77b84a4d492" address="unix:///run/containerd/s/7b38bf57394466e807394a56db508b00a5dda86ccc349a538317156ad3503afa" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:12:20.612604 systemd[1]: Started cri-containerd-7cdb9ffb7ce2b296f28925b6aeb720e356c495827b129348180639d262cd8395.scope - libcontainer container 7cdb9ffb7ce2b296f28925b6aeb720e356c495827b129348180639d262cd8395. Oct 13 00:12:20.613657 systemd[1]: Started cri-containerd-ba0cbe8ba179fd8ed1e9a20f8345c3ad540ea25aaf8bed3f03a6d97b9a5a18b6.scope - libcontainer container ba0cbe8ba179fd8ed1e9a20f8345c3ad540ea25aaf8bed3f03a6d97b9a5a18b6. Oct 13 00:12:20.629680 systemd[1]: Started cri-containerd-62fc88b0ea72156f1c16b07fec3c10fdf180fed05a1b8940957cc77b84a4d492.scope - libcontainer container 62fc88b0ea72156f1c16b07fec3c10fdf180fed05a1b8940957cc77b84a4d492. Oct 13 00:12:20.654867 containerd[1512]: time="2025-10-13T00:12:20.654828145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cdb9ffb7ce2b296f28925b6aeb720e356c495827b129348180639d262cd8395\"" Oct 13 00:12:20.659634 containerd[1512]: time="2025-10-13T00:12:20.659603865Z" level=info msg="CreateContainer within sandbox \"7cdb9ffb7ce2b296f28925b6aeb720e356c495827b129348180639d262cd8395\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 00:12:20.662759 containerd[1512]: time="2025-10-13T00:12:20.662718345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9bbf3a683d7e5def769dd59ac449f046,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba0cbe8ba179fd8ed1e9a20f8345c3ad540ea25aaf8bed3f03a6d97b9a5a18b6\"" Oct 13 00:12:20.664801 containerd[1512]: time="2025-10-13T00:12:20.664769145Z" level=info msg="CreateContainer within sandbox \"ba0cbe8ba179fd8ed1e9a20f8345c3ad540ea25aaf8bed3f03a6d97b9a5a18b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 00:12:20.669280 containerd[1512]: time="2025-10-13T00:12:20.669241665Z" level=info msg="Container 1ce0a337bf0d4b3a5d33486080197b4486edc47f123dec782c2e808cbec0757e: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:20.676792 containerd[1512]: time="2025-10-13T00:12:20.676761665Z" level=info msg="Container 29220c0a1634967508953ac26fc970da4c11c6a41b48b13730a484bdf0ed67c1: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:20.677074 containerd[1512]: time="2025-10-13T00:12:20.677047985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"62fc88b0ea72156f1c16b07fec3c10fdf180fed05a1b8940957cc77b84a4d492\"" Oct 13 00:12:20.679646 containerd[1512]: time="2025-10-13T00:12:20.679611585Z" level=info msg="CreateContainer within sandbox \"62fc88b0ea72156f1c16b07fec3c10fdf180fed05a1b8940957cc77b84a4d492\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 00:12:20.680230 containerd[1512]: time="2025-10-13T00:12:20.680199025Z" level=info msg="CreateContainer within sandbox \"7cdb9ffb7ce2b296f28925b6aeb720e356c495827b129348180639d262cd8395\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"1ce0a337bf0d4b3a5d33486080197b4486edc47f123dec782c2e808cbec0757e\"" Oct 13 00:12:20.681637 containerd[1512]: time="2025-10-13T00:12:20.681608465Z" level=info msg="StartContainer for \"1ce0a337bf0d4b3a5d33486080197b4486edc47f123dec782c2e808cbec0757e\"" Oct 13 00:12:20.682672 containerd[1512]: time="2025-10-13T00:12:20.682638065Z" level=info msg="connecting to shim 1ce0a337bf0d4b3a5d33486080197b4486edc47f123dec782c2e808cbec0757e" address="unix:///run/containerd/s/01041db22fc366f913888559665bad953c8c7e8af0c608d6b9a34fd39097a5df" protocol=ttrpc version=3 Oct 13 00:12:20.683833 containerd[1512]: time="2025-10-13T00:12:20.683755745Z" level=info msg="CreateContainer within sandbox \"ba0cbe8ba179fd8ed1e9a20f8345c3ad540ea25aaf8bed3f03a6d97b9a5a18b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"29220c0a1634967508953ac26fc970da4c11c6a41b48b13730a484bdf0ed67c1\"" Oct 13 00:12:20.684383 containerd[1512]: time="2025-10-13T00:12:20.684085305Z" level=info msg="StartContainer for \"29220c0a1634967508953ac26fc970da4c11c6a41b48b13730a484bdf0ed67c1\"" Oct 13 00:12:20.685299 containerd[1512]: time="2025-10-13T00:12:20.685262945Z" level=info msg="connecting to shim 29220c0a1634967508953ac26fc970da4c11c6a41b48b13730a484bdf0ed67c1" address="unix:///run/containerd/s/b55b854864b63d7ff8267804806a4e0d87e11c7957d0e5868c6b179b082b995f" protocol=ttrpc version=3 Oct 13 00:12:20.686360 containerd[1512]: time="2025-10-13T00:12:20.686058825Z" level=info msg="Container 43934f756e1c39fda8143ae673c6c94de3fdd1a1f24cfe37bcb34e186c06730f: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:20.693708 containerd[1512]: time="2025-10-13T00:12:20.693665345Z" level=info msg="CreateContainer within sandbox \"62fc88b0ea72156f1c16b07fec3c10fdf180fed05a1b8940957cc77b84a4d492\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43934f756e1c39fda8143ae673c6c94de3fdd1a1f24cfe37bcb34e186c06730f\"" Oct 13 00:12:20.694101 containerd[1512]: time="2025-10-13T00:12:20.694055985Z" level=info msg="StartContainer for \"43934f756e1c39fda8143ae673c6c94de3fdd1a1f24cfe37bcb34e186c06730f\"" Oct 13 00:12:20.695915 containerd[1512]: time="2025-10-13T00:12:20.695862585Z" level=info msg="connecting to shim 43934f756e1c39fda8143ae673c6c94de3fdd1a1f24cfe37bcb34e186c06730f" address="unix:///run/containerd/s/7b38bf57394466e807394a56db508b00a5dda86ccc349a538317156ad3503afa" protocol=ttrpc version=3 Oct 13 00:12:20.703673 systemd[1]: Started cri-containerd-1ce0a337bf0d4b3a5d33486080197b4486edc47f123dec782c2e808cbec0757e.scope - libcontainer container 1ce0a337bf0d4b3a5d33486080197b4486edc47f123dec782c2e808cbec0757e. Oct 13 00:12:20.706529 systemd[1]: Started cri-containerd-29220c0a1634967508953ac26fc970da4c11c6a41b48b13730a484bdf0ed67c1.scope - libcontainer container 29220c0a1634967508953ac26fc970da4c11c6a41b48b13730a484bdf0ed67c1. Oct 13 00:12:20.743643 systemd[1]: Started cri-containerd-43934f756e1c39fda8143ae673c6c94de3fdd1a1f24cfe37bcb34e186c06730f.scope - libcontainer container 43934f756e1c39fda8143ae673c6c94de3fdd1a1f24cfe37bcb34e186c06730f. 
Oct 13 00:12:20.783366 containerd[1512]: time="2025-10-13T00:12:20.783286865Z" level=info msg="StartContainer for \"29220c0a1634967508953ac26fc970da4c11c6a41b48b13730a484bdf0ed67c1\" returns successfully" Oct 13 00:12:20.786808 containerd[1512]: time="2025-10-13T00:12:20.786685305Z" level=info msg="StartContainer for \"1ce0a337bf0d4b3a5d33486080197b4486edc47f123dec782c2e808cbec0757e\" returns successfully" Oct 13 00:12:20.801926 containerd[1512]: time="2025-10-13T00:12:20.801792985Z" level=info msg="StartContainer for \"43934f756e1c39fda8143ae673c6c94de3fdd1a1f24cfe37bcb34e186c06730f\" returns successfully" Oct 13 00:12:20.852509 kubelet[2279]: W1013 00:12:20.852279 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Oct 13 00:12:20.852509 kubelet[2279]: E1013 00:12:20.852347 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:12:20.880713 kubelet[2279]: I1013 00:12:20.880683 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:12:21.143551 kubelet[2279]: E1013 00:12:21.143432 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:12:21.148614 kubelet[2279]: E1013 00:12:21.148590 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:12:21.149807 kubelet[2279]: E1013 00:12:21.149774 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:12:22.151721 kubelet[2279]: E1013 00:12:22.151500 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:12:22.151721 kubelet[2279]: E1013 00:12:22.151573 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:12:22.715082 kubelet[2279]: E1013 00:12:22.714949 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 00:12:22.795125 kubelet[2279]: I1013 00:12:22.795062 2279 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 00:12:22.795125 kubelet[2279]: E1013 00:12:22.795111 2279 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 00:12:22.810037 kubelet[2279]: E1013 00:12:22.809998 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:12:22.910372 kubelet[2279]: E1013 00:12:22.910331 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:12:23.011324 kubelet[2279]: E1013 00:12:23.011192 2279 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Oct 13 00:12:23.111405 kubelet[2279]: E1013 00:12:23.111327 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:12:23.198205 kubelet[2279]: I1013 00:12:23.198161 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:23.204201 kubelet[2279]: E1013 00:12:23.204172 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:23.204201 kubelet[2279]: I1013 00:12:23.204200 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:23.206228 kubelet[2279]: E1013 00:12:23.206031 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:23.206228 kubelet[2279]: I1013 00:12:23.206060 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:12:23.207604 kubelet[2279]: E1013 00:12:23.207582 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 00:12:23.427018 kubelet[2279]: I1013 00:12:23.426883 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:23.429081 kubelet[2279]: E1013 00:12:23.429022 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:23.986216 kubelet[2279]: I1013 00:12:23.986171 2279 apiserver.go:52] "Watching apiserver" Oct 13 00:12:23.997928 kubelet[2279]: I1013 00:12:23.997667 2279 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 00:12:24.923464 systemd[1]: Reload requested from client PID 2550 ('systemctl') (unit session-7.scope)... Oct 13 00:12:24.923478 systemd[1]: Reloading... Oct 13 00:12:24.995492 zram_generator::config[2593]: No configuration found. Oct 13 00:12:25.157875 systemd[1]: Reloading finished in 234 ms. Oct 13 00:12:25.190690 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:12:25.211403 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 00:12:25.212544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:12:25.212612 systemd[1]: kubelet.service: Consumed 1.373s CPU time, 126.7M memory peak. Oct 13 00:12:25.215712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:12:25.376716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:12:25.380305 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 00:12:25.426595 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 00:12:25.427515 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 00:12:25.427515 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 00:12:25.427515 kubelet[2635]: I1013 00:12:25.427009 2635 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 00:12:25.432638 kubelet[2635]: I1013 00:12:25.432611 2635 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 00:12:25.432638 kubelet[2635]: I1013 00:12:25.432637 2635 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 00:12:25.432971 kubelet[2635]: I1013 00:12:25.432937 2635 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 00:12:25.434206 kubelet[2635]: I1013 00:12:25.434180 2635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 13 00:12:25.436407 kubelet[2635]: I1013 00:12:25.436300 2635 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 00:12:25.441248 kubelet[2635]: I1013 00:12:25.441164 2635 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 00:12:25.444147 kubelet[2635]: I1013 00:12:25.444110 2635 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 00:12:25.444526 kubelet[2635]: I1013 00:12:25.444495 2635 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 00:12:25.444747 kubelet[2635]: I1013 00:12:25.444585 2635 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 00:12:25.444899 kubelet[2635]: I1013 00:12:25.444884 2635 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 00:12:25.444963 kubelet[2635]: I1013 00:12:25.444954 2635 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 00:12:25.445076 kubelet[2635]: I1013 00:12:25.445063 2635 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:12:25.445285 kubelet[2635]: I1013 00:12:25.445274 2635 kubelet.go:446] "Attempting to sync node with API server" Oct 13 00:12:25.445988 kubelet[2635]: I1013 00:12:25.445964 2635 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 00:12:25.446132 kubelet[2635]: I1013 00:12:25.446118 2635 kubelet.go:352] "Adding apiserver pod source" Oct 13 00:12:25.446519 kubelet[2635]: I1013 00:12:25.446176 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 00:12:25.447022 kubelet[2635]: I1013 00:12:25.446993 2635 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 00:12:25.447538 kubelet[2635]: I1013 00:12:25.447513 2635 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 00:12:25.448416 kubelet[2635]: I1013 00:12:25.448017 2635 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 00:12:25.448416 kubelet[2635]: I1013 00:12:25.448055 2635 server.go:1287] "Started kubelet" Oct 13 00:12:25.449276 kubelet[2635]: I1013 00:12:25.448912 2635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 00:12:25.451197 kubelet[2635]: I1013 
00:12:25.450562 2635 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 00:12:25.451197 kubelet[2635]: I1013 00:12:25.449768 2635 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 00:12:25.451197 kubelet[2635]: I1013 00:12:25.449797 2635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 00:12:25.452410 kubelet[2635]: I1013 00:12:25.451869 2635 server.go:479] "Adding debug handlers to kubelet server" Oct 13 00:12:25.455465 kubelet[2635]: I1013 00:12:25.453263 2635 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 00:12:25.455465 kubelet[2635]: I1013 00:12:25.453360 2635 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 00:12:25.455465 kubelet[2635]: I1013 00:12:25.453480 2635 reconciler.go:26] "Reconciler: start to sync state" Oct 13 00:12:25.455683 kubelet[2635]: I1013 00:12:25.449860 2635 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 00:12:25.460029 kubelet[2635]: E1013 00:12:25.459830 2635 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:12:25.471631 kubelet[2635]: E1013 00:12:25.471561 2635 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 00:12:25.471782 kubelet[2635]: I1013 00:12:25.471609 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 00:12:25.471782 kubelet[2635]: I1013 00:12:25.471771 2635 factory.go:221] Registration of the systemd container factory successfully Oct 13 00:12:25.471929 kubelet[2635]: I1013 00:12:25.471882 2635 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 00:12:25.474680 kubelet[2635]: I1013 00:12:25.474648 2635 factory.go:221] Registration of the containerd container factory successfully Oct 13 00:12:25.477627 kubelet[2635]: I1013 00:12:25.477603 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 13 00:12:25.477797 kubelet[2635]: I1013 00:12:25.477726 2635 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 00:12:25.477797 kubelet[2635]: I1013 00:12:25.477749 2635 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
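At this point the restarted kubelet is listening on 0.0.0.0:10250 with the serving cert pair named in the log and is exposing the podresources socket. A quick local probe of the HTTPS endpoint is a `/healthz` request; the sketch below uses plain Go stdlib and skips certificate verification for this diagnostic only, and depending on the kubelet's authentication/authorization settings it may return 401/403 rather than `ok` without credentials:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// The kubelet above serves on 0.0.0.0:10250 with its own kubelet.crt/key,
	// so verification is skipped for this local diagnostic probe only.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://127.0.0.1:10250/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}
```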
Oct 13 00:12:25.477797 kubelet[2635]: I1013 00:12:25.477756 2635 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 00:12:25.477939 kubelet[2635]: E1013 00:12:25.477916 2635 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 00:12:25.506060 kubelet[2635]: I1013 00:12:25.506032 2635 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 00:12:25.506060 kubelet[2635]: I1013 00:12:25.506052 2635 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 00:12:25.506190 kubelet[2635]: I1013 00:12:25.506074 2635 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:12:25.506245 kubelet[2635]: I1013 00:12:25.506228 2635 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 00:12:25.506272 kubelet[2635]: I1013 00:12:25.506244 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 00:12:25.506272 kubelet[2635]: I1013 00:12:25.506261 2635 policy_none.go:49] "None policy: Start" Oct 13 00:12:25.506272 kubelet[2635]: I1013 00:12:25.506269 2635 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 00:12:25.506344 kubelet[2635]: I1013 00:12:25.506278 2635 state_mem.go:35] "Initializing new in-memory state store" Oct 13 00:12:25.506379 kubelet[2635]: I1013 00:12:25.506363 2635 state_mem.go:75] "Updated machine memory state" Oct 13 00:12:25.510237 kubelet[2635]: I1013 00:12:25.510218 2635 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 00:12:25.510388 kubelet[2635]: I1013 00:12:25.510374 2635 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 00:12:25.510439 kubelet[2635]: I1013 00:12:25.510393 2635 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 00:12:25.510870 kubelet[2635]: I1013 00:12:25.510853 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 00:12:25.512105 kubelet[2635]: E1013 00:12:25.512081 2635 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 00:12:25.579620 kubelet[2635]: I1013 00:12:25.579385 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:25.579620 kubelet[2635]: I1013 00:12:25.579440 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:12:25.579620 kubelet[2635]: I1013 00:12:25.579481 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:25.612508 kubelet[2635]: I1013 00:12:25.612481 2635 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:12:25.619495 kubelet[2635]: I1013 00:12:25.619272 2635 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 00:12:25.619495 kubelet[2635]: I1013 00:12:25.619350 2635 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 00:12:25.755515 kubelet[2635]: I1013 00:12:25.755387 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:25.755515 kubelet[2635]: I1013 00:12:25.755428 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:25.755515 kubelet[2635]: I1013 00:12:25.755462 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 00:12:25.755515 kubelet[2635]: I1013 00:12:25.755496 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bbf3a683d7e5def769dd59ac449f046-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9bbf3a683d7e5def769dd59ac449f046\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:25.755684 kubelet[2635]: I1013 00:12:25.755525 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bbf3a683d7e5def769dd59ac449f046-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9bbf3a683d7e5def769dd59ac449f046\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:25.755684 kubelet[2635]: I1013 00:12:25.755555 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:25.755684 kubelet[2635]: I1013 00:12:25.755583 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:25.755684 kubelet[2635]: I1013 00:12:25.755610 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:12:25.755684 kubelet[2635]: I1013 00:12:25.755637 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bbf3a683d7e5def769dd59ac449f046-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9bbf3a683d7e5def769dd59ac449f046\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:26.447373 kubelet[2635]: I1013 00:12:26.447143 2635 apiserver.go:52] "Watching apiserver" Oct 13 00:12:26.453958 kubelet[2635]: I1013 00:12:26.453930 2635 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 00:12:26.489640 kubelet[2635]: I1013 00:12:26.489520 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:26.496699 kubelet[2635]: E1013 00:12:26.496595 2635 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 00:12:26.511699 kubelet[2635]: I1013 00:12:26.510684 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5106694649999999 podStartE2EDuration="1.510669465s" podCreationTimestamp="2025-10-13 00:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:12:26.509626505 +0000 UTC m=+1.125605841" watchObservedRunningTime="2025-10-13 00:12:26.510669465 +0000 UTC m=+1.126648801" Oct 13 00:12:26.518372 kubelet[2635]: I1013 00:12:26.518331 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.518319905 podStartE2EDuration="1.518319905s" podCreationTimestamp="2025-10-13 00:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:12:26.517920785 +0000 UTC m=+1.133900121" watchObservedRunningTime="2025-10-13 00:12:26.518319905 +0000 UTC m=+1.134299241" Oct 13 00:12:26.526062 kubelet[2635]: I1013 00:12:26.525998 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.525989185 podStartE2EDuration="1.525989185s" podCreationTimestamp="2025-10-13 00:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:12:26.525778185 +0000 UTC m=+1.141757521" watchObservedRunningTime="2025-10-13 00:12:26.525989185 +0000 UTC m=+1.141968521" Oct 13 00:12:31.863443 kubelet[2635]: I1013 00:12:31.863408 2635 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 00:12:31.864155 
containerd[1512]: time="2025-10-13T00:12:31.864115621Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 00:12:31.864969 kubelet[2635]: I1013 00:12:31.864499 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 00:12:32.853269 systemd[1]: Created slice kubepods-besteffort-pod37c3a42a_c5f9_49c0_8293_b0001fcf88a4.slice - libcontainer container kubepods-besteffort-pod37c3a42a_c5f9_49c0_8293_b0001fcf88a4.slice. Oct 13 00:12:32.900634 kubelet[2635]: I1013 00:12:32.900576 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37c3a42a-c5f9-49c0-8293-b0001fcf88a4-xtables-lock\") pod \"kube-proxy-5qdtf\" (UID: \"37c3a42a-c5f9-49c0-8293-b0001fcf88a4\") " pod="kube-system/kube-proxy-5qdtf" Oct 13 00:12:32.900989 kubelet[2635]: I1013 00:12:32.900665 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37c3a42a-c5f9-49c0-8293-b0001fcf88a4-kube-proxy\") pod \"kube-proxy-5qdtf\" (UID: \"37c3a42a-c5f9-49c0-8293-b0001fcf88a4\") " pod="kube-system/kube-proxy-5qdtf" Oct 13 00:12:32.900989 kubelet[2635]: I1013 00:12:32.900699 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37c3a42a-c5f9-49c0-8293-b0001fcf88a4-lib-modules\") pod \"kube-proxy-5qdtf\" (UID: \"37c3a42a-c5f9-49c0-8293-b0001fcf88a4\") " pod="kube-system/kube-proxy-5qdtf" Oct 13 00:12:32.900989 kubelet[2635]: I1013 00:12:32.900716 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zgp\" (UniqueName: \"kubernetes.io/projected/37c3a42a-c5f9-49c0-8293-b0001fcf88a4-kube-api-access-k2zgp\") pod \"kube-proxy-5qdtf\" (UID: \"37c3a42a-c5f9-49c0-8293-b0001fcf88a4\") " pod="kube-system/kube-proxy-5qdtf" Oct 13 00:12:32.966644 systemd[1]: Created slice kubepods-besteffort-pod84a9a80d_23ae_4958_a3a5_58751d8586e5.slice - libcontainer container kubepods-besteffort-pod84a9a80d_23ae_4958_a3a5_58751d8586e5.slice. 
Oct 13 00:12:33.001000 kubelet[2635]: I1013 00:12:33.000944 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84a9a80d-23ae-4958-a3a5-58751d8586e5-var-lib-calico\") pod \"tigera-operator-755d956888-xtz9l\" (UID: \"84a9a80d-23ae-4958-a3a5-58751d8586e5\") " pod="tigera-operator/tigera-operator-755d956888-xtz9l" Oct 13 00:12:33.001133 kubelet[2635]: I1013 00:12:33.001016 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spszb\" (UniqueName: \"kubernetes.io/projected/84a9a80d-23ae-4958-a3a5-58751d8586e5-kube-api-access-spszb\") pod \"tigera-operator-755d956888-xtz9l\" (UID: \"84a9a80d-23ae-4958-a3a5-58751d8586e5\") " pod="tigera-operator/tigera-operator-755d956888-xtz9l" Oct 13 00:12:33.165304 containerd[1512]: time="2025-10-13T00:12:33.164990820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qdtf,Uid:37c3a42a-c5f9-49c0-8293-b0001fcf88a4,Namespace:kube-system,Attempt:0,}" Oct 13 00:12:33.270922 containerd[1512]: time="2025-10-13T00:12:33.270869502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xtz9l,Uid:84a9a80d-23ae-4958-a3a5-58751d8586e5,Namespace:tigera-operator,Attempt:0,}" Oct 13 00:12:33.360762 containerd[1512]: time="2025-10-13T00:12:33.360696713Z" level=info msg="connecting to shim 0f706c29bc316c2cd063c06cf2598506a67dfcbe2ec9ede3a9b38e7a012b82c9" address="unix:///run/containerd/s/d2fc88bb89bef0dbcdde6d8c2c773b1593270c8b9a3e4bcc3ed4ebc65028567c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:12:33.365295 containerd[1512]: time="2025-10-13T00:12:33.365211321Z" level=info msg="connecting to shim db4c164a78ed37267181e5189720b955c391cbb51697a602a97fdcfaaea63144" address="unix:///run/containerd/s/239867e720bcb0661a9e762c1fd185ec7d3027855609ed328487a2fcf128dbb2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:12:33.387675 systemd[1]: Started cri-containerd-0f706c29bc316c2cd063c06cf2598506a67dfcbe2ec9ede3a9b38e7a012b82c9.scope - libcontainer container 0f706c29bc316c2cd063c06cf2598506a67dfcbe2ec9ede3a9b38e7a012b82c9. Oct 13 00:12:33.391053 systemd[1]: Started cri-containerd-db4c164a78ed37267181e5189720b955c391cbb51697a602a97fdcfaaea63144.scope - libcontainer container db4c164a78ed37267181e5189720b955c391cbb51697a602a97fdcfaaea63144. 
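The reconciler entries above attach four volumes to the kube-proxy pod: two hostPath mounts (`xtables-lock`, `lib-modules`), the `kube-proxy` ConfigMap, and a projected service-account token (`kube-api-access-k2zgp`). Expressed with the Kubernetes Go API types, the hostPath and ConfigMap volume sources look roughly like the sketch below; note the log records only the volume names, so the host paths shown are the conventional kube-proxy ones and are an assumption here:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			// Assumed path: the lock file kube-proxy shares with other iptables users.
			Name: "xtables-lock",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"},
			},
		},
		{
			// Assumed path for kernel modules.
			Name: "lib-modules",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
		{
			// The kube-proxy ConfigMap named in the reconciler entry above.
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Printf("%s -> %+v\n", v.Name, v.VolumeSource)
	}
}
```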
Oct 13 00:12:33.417827 containerd[1512]: time="2025-10-13T00:12:33.417715261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qdtf,Uid:37c3a42a-c5f9-49c0-8293-b0001fcf88a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f706c29bc316c2cd063c06cf2598506a67dfcbe2ec9ede3a9b38e7a012b82c9\"" Oct 13 00:12:33.421776 containerd[1512]: time="2025-10-13T00:12:33.421735029Z" level=info msg="CreateContainer within sandbox \"0f706c29bc316c2cd063c06cf2598506a67dfcbe2ec9ede3a9b38e7a012b82c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 00:12:33.432970 containerd[1512]: time="2025-10-13T00:12:33.432892810Z" level=info msg="Container 34973353e0c048f49a145cadd8598f50c05e24771d8945444c8a1b5b96f16b5d: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:33.434774 containerd[1512]: time="2025-10-13T00:12:33.434732254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xtz9l,Uid:84a9a80d-23ae-4958-a3a5-58751d8586e5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"db4c164a78ed37267181e5189720b955c391cbb51697a602a97fdcfaaea63144\"" Oct 13 00:12:33.436867 containerd[1512]: time="2025-10-13T00:12:33.436729737Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 00:12:33.443020 containerd[1512]: time="2025-10-13T00:12:33.442965469Z" level=info msg="CreateContainer within sandbox \"0f706c29bc316c2cd063c06cf2598506a67dfcbe2ec9ede3a9b38e7a012b82c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34973353e0c048f49a145cadd8598f50c05e24771d8945444c8a1b5b96f16b5d\"" Oct 13 00:12:33.443812 containerd[1512]: time="2025-10-13T00:12:33.443771151Z" level=info msg="StartContainer for \"34973353e0c048f49a145cadd8598f50c05e24771d8945444c8a1b5b96f16b5d\"" Oct 13 00:12:33.445344 containerd[1512]: time="2025-10-13T00:12:33.445316954Z" level=info msg="connecting to shim 34973353e0c048f49a145cadd8598f50c05e24771d8945444c8a1b5b96f16b5d" address="unix:///run/containerd/s/d2fc88bb89bef0dbcdde6d8c2c773b1593270c8b9a3e4bcc3ed4ebc65028567c" protocol=ttrpc version=3 Oct 13 00:12:33.466687 systemd[1]: Started cri-containerd-34973353e0c048f49a145cadd8598f50c05e24771d8945444c8a1b5b96f16b5d.scope - libcontainer container 34973353e0c048f49a145cadd8598f50c05e24771d8945444c8a1b5b96f16b5d. Oct 13 00:12:33.502883 containerd[1512]: time="2025-10-13T00:12:33.502839943Z" level=info msg="StartContainer for \"34973353e0c048f49a145cadd8598f50c05e24771d8945444c8a1b5b96f16b5d\" returns successfully" Oct 13 00:12:34.529206 kubelet[2635]: I1013 00:12:34.529121 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5qdtf" podStartSLOduration=2.5291019930000003 podStartE2EDuration="2.529101993s" podCreationTimestamp="2025-10-13 00:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:12:34.528909713 +0000 UTC m=+9.144889049" watchObservedRunningTime="2025-10-13 00:12:34.529101993 +0000 UTC m=+9.145081329" Oct 13 00:12:35.070950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179623524.mount: Deactivated successfully. 
Oct 13 00:12:35.348776 containerd[1512]: time="2025-10-13T00:12:35.348512417Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:35.350052 containerd[1512]: time="2025-10-13T00:12:35.349867419Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Oct 13 00:12:35.350864 containerd[1512]: time="2025-10-13T00:12:35.350831020Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:35.352925 containerd[1512]: time="2025-10-13T00:12:35.352897024Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:35.354166 containerd[1512]: time="2025-10-13T00:12:35.354067946Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.917300369s" Oct 13 00:12:35.354166 containerd[1512]: time="2025-10-13T00:12:35.354097346Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Oct 13 00:12:35.355903 containerd[1512]: time="2025-10-13T00:12:35.355874629Z" level=info msg="CreateContainer within sandbox \"db4c164a78ed37267181e5189720b955c391cbb51697a602a97fdcfaaea63144\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 00:12:35.362393 containerd[1512]: time="2025-10-13T00:12:35.361851599Z" level=info msg="Container 0552993bd16f97828cb2a67f9e9f240b9dc65d5112112df8eaebb043b2ff67e8: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:35.367443 containerd[1512]: time="2025-10-13T00:12:35.367399408Z" level=info msg="CreateContainer within sandbox \"db4c164a78ed37267181e5189720b955c391cbb51697a602a97fdcfaaea63144\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0552993bd16f97828cb2a67f9e9f240b9dc65d5112112df8eaebb043b2ff67e8\"" Oct 13 00:12:35.367922 containerd[1512]: time="2025-10-13T00:12:35.367878649Z" level=info msg="StartContainer for \"0552993bd16f97828cb2a67f9e9f240b9dc65d5112112df8eaebb043b2ff67e8\"" Oct 13 00:12:35.368673 containerd[1512]: time="2025-10-13T00:12:35.368643530Z" level=info msg="connecting to shim 0552993bd16f97828cb2a67f9e9f240b9dc65d5112112df8eaebb043b2ff67e8" address="unix:///run/containerd/s/239867e720bcb0661a9e762c1fd185ec7d3027855609ed328487a2fcf128dbb2" protocol=ttrpc version=3 Oct 13 00:12:35.388603 systemd[1]: Started cri-containerd-0552993bd16f97828cb2a67f9e9f240b9dc65d5112112df8eaebb043b2ff67e8.scope - libcontainer container 0552993bd16f97828cb2a67f9e9f240b9dc65d5112112df8eaebb043b2ff67e8. 
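The entries above record containerd pulling `quay.io/tigera/operator:v1.38.6` (about 22 MB in roughly 1.9 s) and resolving it to the digest shown. For comparison, the same pull issued directly through the containerd Go client looks roughly like this, under the same socket-path and import-path assumptions as the earlier listing sketch:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Pull into the CRI namespace so the kubelet can see the image as well.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference the kubelet asked containerd to pull above.
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.6", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```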
Oct 13 00:12:35.411240 containerd[1512]: time="2025-10-13T00:12:35.411206761Z" level=info msg="StartContainer for \"0552993bd16f97828cb2a67f9e9f240b9dc65d5112112df8eaebb043b2ff67e8\" returns successfully" Oct 13 00:12:36.826057 kubelet[2635]: I1013 00:12:36.825992 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-xtz9l" podStartSLOduration=2.907398952 podStartE2EDuration="4.825973203s" podCreationTimestamp="2025-10-13 00:12:32 +0000 UTC" firstStartedPulling="2025-10-13 00:12:33.436168616 +0000 UTC m=+8.052147952" lastFinishedPulling="2025-10-13 00:12:35.354742867 +0000 UTC m=+9.970722203" observedRunningTime="2025-10-13 00:12:35.533183005 +0000 UTC m=+10.149162341" watchObservedRunningTime="2025-10-13 00:12:36.825973203 +0000 UTC m=+11.441952539" Oct 13 00:12:40.800643 sudo[1717]: pam_unix(sudo:session): session closed for user root Oct 13 00:12:40.803738 sshd[1716]: Connection closed by 10.0.0.1 port 47296 Oct 13 00:12:40.804349 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Oct 13 00:12:40.808477 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit. Oct 13 00:12:40.810727 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:47296.service: Deactivated successfully. Oct 13 00:12:40.812426 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 00:12:40.814677 systemd[1]: session-7.scope: Consumed 6.209s CPU time, 222.2M memory peak. Oct 13 00:12:40.816013 systemd-logind[1484]: Removed session 7. Oct 13 00:12:42.039546 update_engine[1491]: I20251013 00:12:42.039480 1491 update_attempter.cc:509] Updating boot flags... Oct 13 00:12:45.675519 systemd[1]: Created slice kubepods-besteffort-poddd6b6e39_c234_4cf9_a555_f7f93356f34f.slice - libcontainer container kubepods-besteffort-poddd6b6e39_c234_4cf9_a555_f7f93356f34f.slice. Oct 13 00:12:45.686417 kubelet[2635]: I1013 00:12:45.686375 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd6b6e39-c234-4cf9-a555-f7f93356f34f-tigera-ca-bundle\") pod \"calico-typha-77894d49c8-rdhts\" (UID: \"dd6b6e39-c234-4cf9-a555-f7f93356f34f\") " pod="calico-system/calico-typha-77894d49c8-rdhts" Oct 13 00:12:45.686417 kubelet[2635]: I1013 00:12:45.686418 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd6b6e39-c234-4cf9-a555-f7f93356f34f-typha-certs\") pod \"calico-typha-77894d49c8-rdhts\" (UID: \"dd6b6e39-c234-4cf9-a555-f7f93356f34f\") " pod="calico-system/calico-typha-77894d49c8-rdhts" Oct 13 00:12:45.686826 kubelet[2635]: I1013 00:12:45.686442 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47zbt\" (UniqueName: \"kubernetes.io/projected/dd6b6e39-c234-4cf9-a555-f7f93356f34f-kube-api-access-47zbt\") pod \"calico-typha-77894d49c8-rdhts\" (UID: \"dd6b6e39-c234-4cf9-a555-f7f93356f34f\") " pod="calico-system/calico-typha-77894d49c8-rdhts" Oct 13 00:12:45.924956 systemd[1]: Created slice kubepods-besteffort-pod29774136_4a30_4d6e_8b43_2bc955b37a04.slice - libcontainer container kubepods-besteffort-pod29774136_4a30_4d6e_8b43_2bc955b37a04.slice. 
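The `pod_startup_latency_tracker` entry above for tigera-operator reports both an end-to-end duration and an SLO duration; from the timestamps in that entry, the SLO figure equals the end-to-end time minus the image-pull window (`lastFinishedPulling - firstStartedPulling`). Reproducing the arithmetic with the values from the log:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func mustParse(value string) time.Time {
	// Layout matching the "+0000 UTC" timestamps printed by the tracker;
	// Go accepts the optional fractional seconds while parsing.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", value)
	if err != nil {
		log.Fatal(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-13 00:12:32 +0000 UTC")
	firstPull := mustParse("2025-10-13 00:12:33.436168616 +0000 UTC")
	lastPull := mustParse("2025-10-13 00:12:35.354742867 +0000 UTC")
	running := mustParse("2025-10-13 00:12:36.825973203 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration excludes the pull window

	fmt.Println("E2E:", e2e) // 4.825973203s
	fmt.Println("SLO:", slo) // 2.907398952s
}
```

Both printed values match the figures reported in the log entry above.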
Oct 13 00:12:45.978709 containerd[1512]: time="2025-10-13T00:12:45.978341581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77894d49c8-rdhts,Uid:dd6b6e39-c234-4cf9-a555-f7f93356f34f,Namespace:calico-system,Attempt:0,}" Oct 13 00:12:45.988924 kubelet[2635]: I1013 00:12:45.988876 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-var-run-calico\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.988924 kubelet[2635]: I1013 00:12:45.988925 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-cni-bin-dir\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989055 kubelet[2635]: I1013 00:12:45.988942 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-cni-log-dir\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989055 kubelet[2635]: I1013 00:12:45.988960 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-lib-modules\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989055 kubelet[2635]: I1013 00:12:45.988975 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/29774136-4a30-4d6e-8b43-2bc955b37a04-node-certs\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989055 kubelet[2635]: I1013 00:12:45.988991 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-policysync\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989055 kubelet[2635]: I1013 00:12:45.989007 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-var-lib-calico\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989161 kubelet[2635]: I1013 00:12:45.989024 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29774136-4a30-4d6e-8b43-2bc955b37a04-tigera-ca-bundle\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989161 kubelet[2635]: I1013 00:12:45.989044 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-flexvol-driver-host\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989161 kubelet[2635]: I1013 00:12:45.989062 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-cni-net-dir\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989161 kubelet[2635]: I1013 00:12:45.989077 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29774136-4a30-4d6e-8b43-2bc955b37a04-xtables-lock\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:45.989161 kubelet[2635]: I1013 00:12:45.989090 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p64lt\" (UniqueName: \"kubernetes.io/projected/29774136-4a30-4d6e-8b43-2bc955b37a04-kube-api-access-p64lt\") pod \"calico-node-vdqtf\" (UID: \"29774136-4a30-4d6e-8b43-2bc955b37a04\") " pod="calico-system/calico-node-vdqtf" Oct 13 00:12:46.024148 containerd[1512]: time="2025-10-13T00:12:46.024080780Z" level=info msg="connecting to shim 09f1fd9164949076626bec5f9f51ab3db9c6c92653a124fe6048101726b4db22" address="unix:///run/containerd/s/2f8f569c873d21efa0d368afee8abf52d15e78a8b12c09a0796d01f09099eaf1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:12:46.074634 systemd[1]: Started cri-containerd-09f1fd9164949076626bec5f9f51ab3db9c6c92653a124fe6048101726b4db22.scope - libcontainer container 09f1fd9164949076626bec5f9f51ab3db9c6c92653a124fe6048101726b4db22. Oct 13 00:12:46.092009 kubelet[2635]: E1013 00:12:46.091956 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.092273 kubelet[2635]: W1013 00:12:46.092000 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.093391 kubelet[2635]: E1013 00:12:46.093284 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.093391 kubelet[2635]: W1013 00:12:46.093387 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.093755 kubelet[2635]: E1013 00:12:46.093416 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.094336 kubelet[2635]: E1013 00:12:46.094272 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.094439 kubelet[2635]: E1013 00:12:46.094362 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.095119 kubelet[2635]: W1013 00:12:46.095091 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.095119 kubelet[2635]: E1013 00:12:46.095127 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.095780 kubelet[2635]: E1013 00:12:46.095668 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.095823 kubelet[2635]: W1013 00:12:46.095780 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.095823 kubelet[2635]: E1013 00:12:46.095801 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.096868 kubelet[2635]: E1013 00:12:46.096777 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.097507 kubelet[2635]: W1013 00:12:46.097483 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.097546 kubelet[2635]: E1013 00:12:46.097515 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.098815 kubelet[2635]: E1013 00:12:46.098783 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.098815 kubelet[2635]: W1013 00:12:46.098803 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.098958 kubelet[2635]: E1013 00:12:46.098937 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.099529 kubelet[2635]: E1013 00:12:46.099505 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.099595 kubelet[2635]: W1013 00:12:46.099526 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.099782 kubelet[2635]: E1013 00:12:46.099755 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.099782 kubelet[2635]: W1013 00:12:46.099778 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.099922 kubelet[2635]: E1013 00:12:46.099908 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.099951 kubelet[2635]: W1013 00:12:46.099919 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.099951 kubelet[2635]: E1013 00:12:46.099938 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.100084 kubelet[2635]: E1013 00:12:46.100070 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.100113 kubelet[2635]: W1013 00:12:46.100087 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.100113 kubelet[2635]: E1013 00:12:46.100097 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.100269 kubelet[2635]: E1013 00:12:46.100252 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.100269 kubelet[2635]: W1013 00:12:46.100264 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.100316 kubelet[2635]: E1013 00:12:46.100272 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.101279 kubelet[2635]: E1013 00:12:46.101026 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.101279 kubelet[2635]: E1013 00:12:46.101055 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.113853 kubelet[2635]: E1013 00:12:46.113523 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxnvf" podUID="ca6273ce-aeab-40b7-bf01-7ab04b5d2d19" Oct 13 00:12:46.114198 kubelet[2635]: E1013 00:12:46.114174 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.114198 kubelet[2635]: W1013 00:12:46.114192 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.114258 kubelet[2635]: E1013 00:12:46.114211 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.115252 kubelet[2635]: E1013 00:12:46.115226 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.115252 kubelet[2635]: W1013 00:12:46.115248 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.115346 kubelet[2635]: E1013 00:12:46.115263 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.178856 kubelet[2635]: E1013 00:12:46.178822 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.178856 kubelet[2635]: W1013 00:12:46.178846 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.178856 kubelet[2635]: E1013 00:12:46.178868 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.179614 kubelet[2635]: E1013 00:12:46.179046 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.179614 kubelet[2635]: W1013 00:12:46.179055 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.179614 kubelet[2635]: E1013 00:12:46.179139 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.179614 kubelet[2635]: E1013 00:12:46.179288 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.179614 kubelet[2635]: W1013 00:12:46.179298 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.179614 kubelet[2635]: E1013 00:12:46.179306 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.179614 kubelet[2635]: E1013 00:12:46.179418 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.179614 kubelet[2635]: W1013 00:12:46.179425 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.179614 kubelet[2635]: E1013 00:12:46.179432 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.180295 kubelet[2635]: E1013 00:12:46.179704 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.180295 kubelet[2635]: W1013 00:12:46.179714 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.180295 kubelet[2635]: E1013 00:12:46.179723 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.180295 kubelet[2635]: E1013 00:12:46.180132 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.180295 kubelet[2635]: W1013 00:12:46.180144 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.180295 kubelet[2635]: E1013 00:12:46.180155 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.180959 kubelet[2635]: E1013 00:12:46.180329 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.180959 kubelet[2635]: W1013 00:12:46.180337 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.180959 kubelet[2635]: E1013 00:12:46.180366 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.180959 kubelet[2635]: E1013 00:12:46.180534 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.180959 kubelet[2635]: W1013 00:12:46.180543 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.180959 kubelet[2635]: E1013 00:12:46.180551 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.180959 kubelet[2635]: E1013 00:12:46.180684 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.180959 kubelet[2635]: W1013 00:12:46.180693 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.180959 kubelet[2635]: E1013 00:12:46.180735 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.182209 kubelet[2635]: E1013 00:12:46.181398 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.182209 kubelet[2635]: W1013 00:12:46.181413 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.182209 kubelet[2635]: E1013 00:12:46.181425 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.182209 kubelet[2635]: E1013 00:12:46.181597 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.182209 kubelet[2635]: W1013 00:12:46.181605 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.182209 kubelet[2635]: E1013 00:12:46.181613 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.182209 kubelet[2635]: E1013 00:12:46.181881 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.182209 kubelet[2635]: W1013 00:12:46.181892 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.182209 kubelet[2635]: E1013 00:12:46.181901 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.182608 kubelet[2635]: E1013 00:12:46.182587 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.182608 kubelet[2635]: W1013 00:12:46.182600 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.183417 kubelet[2635]: E1013 00:12:46.182616 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.183417 kubelet[2635]: E1013 00:12:46.182776 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.183417 kubelet[2635]: W1013 00:12:46.182785 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.183417 kubelet[2635]: E1013 00:12:46.182792 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.183417 kubelet[2635]: E1013 00:12:46.182913 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.183417 kubelet[2635]: W1013 00:12:46.182920 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.183417 kubelet[2635]: E1013 00:12:46.182927 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.183417 kubelet[2635]: E1013 00:12:46.183039 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.183417 kubelet[2635]: W1013 00:12:46.183045 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.183417 kubelet[2635]: E1013 00:12:46.183053 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.183694 kubelet[2635]: E1013 00:12:46.183265 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.183694 kubelet[2635]: W1013 00:12:46.183274 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.183694 kubelet[2635]: E1013 00:12:46.183281 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.183694 kubelet[2635]: E1013 00:12:46.183491 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.183694 kubelet[2635]: W1013 00:12:46.183501 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.183694 kubelet[2635]: E1013 00:12:46.183511 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.183694 kubelet[2635]: E1013 00:12:46.183645 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.183694 kubelet[2635]: W1013 00:12:46.183652 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.183694 kubelet[2635]: E1013 00:12:46.183659 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.184580 kubelet[2635]: E1013 00:12:46.183908 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.184580 kubelet[2635]: W1013 00:12:46.183918 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.184580 kubelet[2635]: E1013 00:12:46.183927 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.193027 kubelet[2635]: E1013 00:12:46.192475 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.193027 kubelet[2635]: W1013 00:12:46.192494 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.193027 kubelet[2635]: E1013 00:12:46.192510 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.193027 kubelet[2635]: I1013 00:12:46.192538 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6p4\" (UniqueName: \"kubernetes.io/projected/ca6273ce-aeab-40b7-bf01-7ab04b5d2d19-kube-api-access-bj6p4\") pod \"csi-node-driver-fxnvf\" (UID: \"ca6273ce-aeab-40b7-bf01-7ab04b5d2d19\") " pod="calico-system/csi-node-driver-fxnvf" Oct 13 00:12:46.193629 kubelet[2635]: E1013 00:12:46.193555 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.194120 kubelet[2635]: W1013 00:12:46.193922 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.194214 kubelet[2635]: E1013 00:12:46.194198 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.194495 kubelet[2635]: E1013 00:12:46.194472 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.194495 kubelet[2635]: W1013 00:12:46.194492 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.194558 kubelet[2635]: E1013 00:12:46.194505 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.195066 kubelet[2635]: I1013 00:12:46.194641 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ca6273ce-aeab-40b7-bf01-7ab04b5d2d19-varrun\") pod \"csi-node-driver-fxnvf\" (UID: \"ca6273ce-aeab-40b7-bf01-7ab04b5d2d19\") " pod="calico-system/csi-node-driver-fxnvf" Oct 13 00:12:46.195110 kubelet[2635]: E1013 00:12:46.195079 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.195110 kubelet[2635]: W1013 00:12:46.195090 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.195110 kubelet[2635]: E1013 00:12:46.195101 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.195462 kubelet[2635]: E1013 00:12:46.195357 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.195573 kubelet[2635]: W1013 00:12:46.195514 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.196014 kubelet[2635]: E1013 00:12:46.195994 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.196723 kubelet[2635]: E1013 00:12:46.196679 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.196723 kubelet[2635]: W1013 00:12:46.196696 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.196723 kubelet[2635]: E1013 00:12:46.196713 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.197189 kubelet[2635]: I1013 00:12:46.197141 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca6273ce-aeab-40b7-bf01-7ab04b5d2d19-socket-dir\") pod \"csi-node-driver-fxnvf\" (UID: \"ca6273ce-aeab-40b7-bf01-7ab04b5d2d19\") " pod="calico-system/csi-node-driver-fxnvf" Oct 13 00:12:46.198060 kubelet[2635]: E1013 00:12:46.197811 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.198060 kubelet[2635]: W1013 00:12:46.197829 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.198060 kubelet[2635]: E1013 00:12:46.197842 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.199029 kubelet[2635]: E1013 00:12:46.199006 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.199029 kubelet[2635]: W1013 00:12:46.199022 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.199121 kubelet[2635]: E1013 00:12:46.199049 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.199546 kubelet[2635]: E1013 00:12:46.199524 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.199671 kubelet[2635]: W1013 00:12:46.199560 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.199671 kubelet[2635]: E1013 00:12:46.199597 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.200146 kubelet[2635]: E1013 00:12:46.200086 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.200146 kubelet[2635]: W1013 00:12:46.200125 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.200146 kubelet[2635]: E1013 00:12:46.200140 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.200274 kubelet[2635]: I1013 00:12:46.200172 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca6273ce-aeab-40b7-bf01-7ab04b5d2d19-kubelet-dir\") pod \"csi-node-driver-fxnvf\" (UID: \"ca6273ce-aeab-40b7-bf01-7ab04b5d2d19\") " pod="calico-system/csi-node-driver-fxnvf" Oct 13 00:12:46.201500 kubelet[2635]: E1013 00:12:46.200380 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.201500 kubelet[2635]: W1013 00:12:46.200392 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.201500 kubelet[2635]: E1013 00:12:46.200403 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.201500 kubelet[2635]: E1013 00:12:46.200782 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.201500 kubelet[2635]: W1013 00:12:46.200793 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.201500 kubelet[2635]: E1013 00:12:46.200811 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.201500 kubelet[2635]: E1013 00:12:46.201173 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.201500 kubelet[2635]: W1013 00:12:46.201184 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.201500 kubelet[2635]: E1013 00:12:46.201231 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.201732 kubelet[2635]: I1013 00:12:46.201253 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca6273ce-aeab-40b7-bf01-7ab04b5d2d19-registration-dir\") pod \"csi-node-driver-fxnvf\" (UID: \"ca6273ce-aeab-40b7-bf01-7ab04b5d2d19\") " pod="calico-system/csi-node-driver-fxnvf" Oct 13 00:12:46.202324 kubelet[2635]: E1013 00:12:46.202305 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.202359 kubelet[2635]: W1013 00:12:46.202324 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.202359 kubelet[2635]: E1013 00:12:46.202338 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.203288 kubelet[2635]: E1013 00:12:46.203265 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.203288 kubelet[2635]: W1013 00:12:46.203282 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.203379 kubelet[2635]: E1013 00:12:46.203296 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.224226 containerd[1512]: time="2025-10-13T00:12:46.224185345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77894d49c8-rdhts,Uid:dd6b6e39-c234-4cf9-a555-f7f93356f34f,Namespace:calico-system,Attempt:0,} returns sandbox id \"09f1fd9164949076626bec5f9f51ab3db9c6c92653a124fe6048101726b4db22\"" Oct 13 00:12:46.226479 containerd[1512]: time="2025-10-13T00:12:46.226091266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 00:12:46.230317 containerd[1512]: time="2025-10-13T00:12:46.230234230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vdqtf,Uid:29774136-4a30-4d6e-8b43-2bc955b37a04,Namespace:calico-system,Attempt:0,}" Oct 13 00:12:46.250127 containerd[1512]: time="2025-10-13T00:12:46.250033526Z" level=info msg="connecting to shim 6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f" address="unix:///run/containerd/s/71249dd4eedb5da24f420b2f2b5ac2c1f810a9f324d7fc1abb0ff22cf0474e45" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:12:46.275656 systemd[1]: Started cri-containerd-6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f.scope - libcontainer container 6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f. 
Oct 13 00:12:46.301124 containerd[1512]: time="2025-10-13T00:12:46.301074928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vdqtf,Uid:29774136-4a30-4d6e-8b43-2bc955b37a04,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f\"" Oct 13 00:12:46.301890 kubelet[2635]: E1013 00:12:46.301871 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.301890 kubelet[2635]: W1013 00:12:46.301890 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.302139 kubelet[2635]: E1013 00:12:46.302116 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.302505 kubelet[2635]: E1013 00:12:46.302355 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.302505 kubelet[2635]: W1013 00:12:46.302369 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.302505 kubelet[2635]: E1013 00:12:46.302386 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.303352 kubelet[2635]: E1013 00:12:46.302679 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.303352 kubelet[2635]: W1013 00:12:46.302695 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.303352 kubelet[2635]: E1013 00:12:46.302712 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.303352 kubelet[2635]: E1013 00:12:46.302970 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.303352 kubelet[2635]: W1013 00:12:46.302981 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.303352 kubelet[2635]: E1013 00:12:46.302995 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.303352 kubelet[2635]: E1013 00:12:46.303220 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.303352 kubelet[2635]: W1013 00:12:46.303228 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.303352 kubelet[2635]: E1013 00:12:46.303250 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.303842 kubelet[2635]: E1013 00:12:46.303418 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.303842 kubelet[2635]: W1013 00:12:46.303428 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.304329 kubelet[2635]: E1013 00:12:46.303444 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.304437 kubelet[2635]: E1013 00:12:46.304419 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.304437 kubelet[2635]: W1013 00:12:46.304434 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.304503 kubelet[2635]: E1013 00:12:46.304463 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.304965 kubelet[2635]: E1013 00:12:46.304940 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.304965 kubelet[2635]: W1013 00:12:46.304960 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.305132 kubelet[2635]: E1013 00:12:46.305103 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.306329 kubelet[2635]: E1013 00:12:46.306312 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.306329 kubelet[2635]: W1013 00:12:46.306328 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.306431 kubelet[2635]: E1013 00:12:46.306415 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.307189 kubelet[2635]: E1013 00:12:46.307160 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.307189 kubelet[2635]: W1013 00:12:46.307179 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.307387 kubelet[2635]: E1013 00:12:46.307277 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.307563 kubelet[2635]: E1013 00:12:46.307523 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.307563 kubelet[2635]: W1013 00:12:46.307536 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.307669 kubelet[2635]: E1013 00:12:46.307614 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.307822 kubelet[2635]: E1013 00:12:46.307806 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.307822 kubelet[2635]: W1013 00:12:46.307821 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.307933 kubelet[2635]: E1013 00:12:46.307914 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.308037 kubelet[2635]: E1013 00:12:46.308019 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.308037 kubelet[2635]: W1013 00:12:46.308033 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.308096 kubelet[2635]: E1013 00:12:46.308078 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.308299 kubelet[2635]: E1013 00:12:46.308285 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.308299 kubelet[2635]: W1013 00:12:46.308298 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.308388 kubelet[2635]: E1013 00:12:46.308328 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.308526 kubelet[2635]: E1013 00:12:46.308513 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.308561 kubelet[2635]: W1013 00:12:46.308528 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.308561 kubelet[2635]: E1013 00:12:46.308543 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.308705 kubelet[2635]: E1013 00:12:46.308693 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.308705 kubelet[2635]: W1013 00:12:46.308704 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.308797 kubelet[2635]: E1013 00:12:46.308782 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.308882 kubelet[2635]: E1013 00:12:46.308870 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.308882 kubelet[2635]: W1013 00:12:46.308881 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.308972 kubelet[2635]: E1013 00:12:46.308961 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.309109 kubelet[2635]: E1013 00:12:46.309099 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.309109 kubelet[2635]: W1013 00:12:46.309109 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.309165 kubelet[2635]: E1013 00:12:46.309126 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.309295 kubelet[2635]: E1013 00:12:46.309283 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.309295 kubelet[2635]: W1013 00:12:46.309294 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.309346 kubelet[2635]: E1013 00:12:46.309312 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.310784 kubelet[2635]: E1013 00:12:46.310766 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.310784 kubelet[2635]: W1013 00:12:46.310783 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.310909 kubelet[2635]: E1013 00:12:46.310802 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.310994 kubelet[2635]: E1013 00:12:46.310981 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.311021 kubelet[2635]: W1013 00:12:46.310994 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.311021 kubelet[2635]: E1013 00:12:46.311009 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.311151 kubelet[2635]: E1013 00:12:46.311140 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.311151 kubelet[2635]: W1013 00:12:46.311151 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.311202 kubelet[2635]: E1013 00:12:46.311165 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.311362 kubelet[2635]: E1013 00:12:46.311350 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.311362 kubelet[2635]: W1013 00:12:46.311361 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.311477 kubelet[2635]: E1013 00:12:46.311447 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.311754 kubelet[2635]: E1013 00:12:46.311522 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.311754 kubelet[2635]: W1013 00:12:46.311533 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.311754 kubelet[2635]: E1013 00:12:46.311544 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:46.312546 kubelet[2635]: E1013 00:12:46.312528 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.312546 kubelet[2635]: W1013 00:12:46.312546 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.312599 kubelet[2635]: E1013 00:12:46.312559 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:46.322299 kubelet[2635]: E1013 00:12:46.322274 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:46.322299 kubelet[2635]: W1013 00:12:46.322294 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:46.322412 kubelet[2635]: E1013 00:12:46.322311 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:47.206215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686777389.mount: Deactivated successfully. Oct 13 00:12:47.478904 kubelet[2635]: E1013 00:12:47.478560 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxnvf" podUID="ca6273ce-aeab-40b7-bf01-7ab04b5d2d19" Oct 13 00:12:47.726876 containerd[1512]: time="2025-10-13T00:12:47.726832783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:47.728427 containerd[1512]: time="2025-10-13T00:12:47.728385064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Oct 13 00:12:47.729333 containerd[1512]: time="2025-10-13T00:12:47.729235905Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:47.735878 containerd[1512]: time="2025-10-13T00:12:47.735834190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:47.736713 containerd[1512]: time="2025-10-13T00:12:47.736684391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.510534125s" Oct 13 00:12:47.736789 containerd[1512]: time="2025-10-13T00:12:47.736716431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Oct 13 
00:12:47.738191 containerd[1512]: time="2025-10-13T00:12:47.738156632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 00:12:47.757204 containerd[1512]: time="2025-10-13T00:12:47.757155086Z" level=info msg="CreateContainer within sandbox \"09f1fd9164949076626bec5f9f51ab3db9c6c92653a124fe6048101726b4db22\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 00:12:47.763313 containerd[1512]: time="2025-10-13T00:12:47.763261451Z" level=info msg="Container deb089b30d747960b71ac9b7ae91bd967eeed9406e0e5332caeaf8838ed1ba38: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:47.769579 containerd[1512]: time="2025-10-13T00:12:47.769520136Z" level=info msg="CreateContainer within sandbox \"09f1fd9164949076626bec5f9f51ab3db9c6c92653a124fe6048101726b4db22\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"deb089b30d747960b71ac9b7ae91bd967eeed9406e0e5332caeaf8838ed1ba38\"" Oct 13 00:12:47.770499 containerd[1512]: time="2025-10-13T00:12:47.770094056Z" level=info msg="StartContainer for \"deb089b30d747960b71ac9b7ae91bd967eeed9406e0e5332caeaf8838ed1ba38\"" Oct 13 00:12:47.771328 containerd[1512]: time="2025-10-13T00:12:47.771286057Z" level=info msg="connecting to shim deb089b30d747960b71ac9b7ae91bd967eeed9406e0e5332caeaf8838ed1ba38" address="unix:///run/containerd/s/2f8f569c873d21efa0d368afee8abf52d15e78a8b12c09a0796d01f09099eaf1" protocol=ttrpc version=3 Oct 13 00:12:47.792648 systemd[1]: Started cri-containerd-deb089b30d747960b71ac9b7ae91bd967eeed9406e0e5332caeaf8838ed1ba38.scope - libcontainer container deb089b30d747960b71ac9b7ae91bd967eeed9406e0e5332caeaf8838ed1ba38. Oct 13 00:12:47.857322 containerd[1512]: time="2025-10-13T00:12:47.857276764Z" level=info msg="StartContainer for \"deb089b30d747960b71ac9b7ae91bd967eeed9406e0e5332caeaf8838ed1ba38\" returns successfully" Oct 13 00:12:48.567848 kubelet[2635]: I1013 00:12:48.567772 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77894d49c8-rdhts" podStartSLOduration=2.055515438 podStartE2EDuration="3.567737524s" podCreationTimestamp="2025-10-13 00:12:45 +0000 UTC" firstStartedPulling="2025-10-13 00:12:46.225779026 +0000 UTC m=+20.841758362" lastFinishedPulling="2025-10-13 00:12:47.738001112 +0000 UTC m=+22.353980448" observedRunningTime="2025-10-13 00:12:48.567188564 +0000 UTC m=+23.183167900" watchObservedRunningTime="2025-10-13 00:12:48.567737524 +0000 UTC m=+23.183716900" Oct 13 00:12:48.596478 kubelet[2635]: E1013 00:12:48.596434 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.596478 kubelet[2635]: W1013 00:12:48.596469 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.596478 kubelet[2635]: E1013 00:12:48.596490 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:48.596669 kubelet[2635]: E1013 00:12:48.596655 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.597244 kubelet[2635]: W1013 00:12:48.596662 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.597244 kubelet[2635]: E1013 00:12:48.596707 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.597244 kubelet[2635]: E1013 00:12:48.596842 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.597244 kubelet[2635]: W1013 00:12:48.596849 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.597244 kubelet[2635]: E1013 00:12:48.596857 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.597244 kubelet[2635]: E1013 00:12:48.596991 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.597244 kubelet[2635]: W1013 00:12:48.596998 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.597244 kubelet[2635]: E1013 00:12:48.597005 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.597244 kubelet[2635]: E1013 00:12:48.597125 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.597244 kubelet[2635]: W1013 00:12:48.597131 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.597989 kubelet[2635]: E1013 00:12:48.597138 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.597989 kubelet[2635]: E1013 00:12:48.597321 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.597989 kubelet[2635]: W1013 00:12:48.597331 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.597989 kubelet[2635]: E1013 00:12:48.597341 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:48.597989 kubelet[2635]: E1013 00:12:48.597502 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.597989 kubelet[2635]: W1013 00:12:48.597511 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.597989 kubelet[2635]: E1013 00:12:48.597521 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.597989 kubelet[2635]: E1013 00:12:48.597655 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.597989 kubelet[2635]: W1013 00:12:48.597661 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.597989 kubelet[2635]: E1013 00:12:48.597668 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.598673 kubelet[2635]: E1013 00:12:48.598277 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.598673 kubelet[2635]: W1013 00:12:48.598289 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.598673 kubelet[2635]: E1013 00:12:48.598301 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.598673 kubelet[2635]: E1013 00:12:48.598438 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.598673 kubelet[2635]: W1013 00:12:48.598445 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.598673 kubelet[2635]: E1013 00:12:48.598472 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.598673 kubelet[2635]: E1013 00:12:48.598607 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.598673 kubelet[2635]: W1013 00:12:48.598615 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.598673 kubelet[2635]: E1013 00:12:48.598622 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:48.598863 kubelet[2635]: E1013 00:12:48.598738 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.598863 kubelet[2635]: W1013 00:12:48.598745 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.598863 kubelet[2635]: E1013 00:12:48.598760 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.598920 kubelet[2635]: E1013 00:12:48.598893 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.598920 kubelet[2635]: W1013 00:12:48.598901 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.598920 kubelet[2635]: E1013 00:12:48.598909 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.599613 kubelet[2635]: E1013 00:12:48.599034 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.599613 kubelet[2635]: W1013 00:12:48.599044 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.599613 kubelet[2635]: E1013 00:12:48.599051 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.599613 kubelet[2635]: E1013 00:12:48.599173 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.599613 kubelet[2635]: W1013 00:12:48.599179 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.599613 kubelet[2635]: E1013 00:12:48.599185 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.624690 kubelet[2635]: E1013 00:12:48.624655 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.624690 kubelet[2635]: W1013 00:12:48.624684 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.624847 kubelet[2635]: E1013 00:12:48.624703 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:48.625121 kubelet[2635]: E1013 00:12:48.625098 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.625155 kubelet[2635]: W1013 00:12:48.625138 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.625193 kubelet[2635]: E1013 00:12:48.625159 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.625672 kubelet[2635]: E1013 00:12:48.625653 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.625672 kubelet[2635]: W1013 00:12:48.625669 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.625732 kubelet[2635]: E1013 00:12:48.625691 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.626343 kubelet[2635]: E1013 00:12:48.626311 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.626343 kubelet[2635]: W1013 00:12:48.626329 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.626395 kubelet[2635]: E1013 00:12:48.626346 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.626683 kubelet[2635]: E1013 00:12:48.626655 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.626683 kubelet[2635]: W1013 00:12:48.626672 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.626813 kubelet[2635]: E1013 00:12:48.626789 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.627031 kubelet[2635]: E1013 00:12:48.627013 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.627067 kubelet[2635]: W1013 00:12:48.627030 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.627067 kubelet[2635]: E1013 00:12:48.627055 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:48.627433 kubelet[2635]: E1013 00:12:48.627412 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.627433 kubelet[2635]: W1013 00:12:48.627430 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.627515 kubelet[2635]: E1013 00:12:48.627494 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.628166 kubelet[2635]: E1013 00:12:48.628148 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.628166 kubelet[2635]: W1013 00:12:48.628166 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.628292 kubelet[2635]: E1013 00:12:48.628254 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.629072 kubelet[2635]: E1013 00:12:48.629033 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.629183 kubelet[2635]: W1013 00:12:48.629161 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.629479 kubelet[2635]: E1013 00:12:48.629345 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.630044 kubelet[2635]: E1013 00:12:48.630002 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.630044 kubelet[2635]: W1013 00:12:48.630020 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.630137 kubelet[2635]: E1013 00:12:48.630054 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.630608 kubelet[2635]: E1013 00:12:48.630591 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.630608 kubelet[2635]: W1013 00:12:48.630605 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.630718 kubelet[2635]: E1013 00:12:48.630693 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:48.630863 kubelet[2635]: E1013 00:12:48.630844 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.630890 kubelet[2635]: W1013 00:12:48.630866 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.630937 kubelet[2635]: E1013 00:12:48.630924 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.631157 kubelet[2635]: E1013 00:12:48.631139 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.631189 kubelet[2635]: W1013 00:12:48.631158 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.631189 kubelet[2635]: E1013 00:12:48.631175 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.631393 kubelet[2635]: E1013 00:12:48.631378 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.631424 kubelet[2635]: W1013 00:12:48.631413 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.631462 kubelet[2635]: E1013 00:12:48.631427 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.633710 kubelet[2635]: E1013 00:12:48.633534 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.633710 kubelet[2635]: W1013 00:12:48.633555 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.633710 kubelet[2635]: E1013 00:12:48.633578 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.633991 kubelet[2635]: E1013 00:12:48.633973 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.634717 kubelet[2635]: W1013 00:12:48.634693 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.634933 kubelet[2635]: E1013 00:12:48.634893 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:12:48.635257 kubelet[2635]: E1013 00:12:48.635037 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.635370 kubelet[2635]: W1013 00:12:48.635352 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.635502 kubelet[2635]: E1013 00:12:48.635442 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.636173 kubelet[2635]: E1013 00:12:48.636151 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:12:48.636173 kubelet[2635]: W1013 00:12:48.636171 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:12:48.636256 kubelet[2635]: E1013 00:12:48.636186 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:12:48.786433 containerd[1512]: time="2025-10-13T00:12:48.786382842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:48.787204 containerd[1512]: time="2025-10-13T00:12:48.787168923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Oct 13 00:12:48.788444 containerd[1512]: time="2025-10-13T00:12:48.788081363Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:48.790429 containerd[1512]: time="2025-10-13T00:12:48.790401685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:48.791542 containerd[1512]: time="2025-10-13T00:12:48.791513526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.053322974s" Oct 13 00:12:48.791658 containerd[1512]: time="2025-10-13T00:12:48.791638966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Oct 13 00:12:48.793885 containerd[1512]: time="2025-10-13T00:12:48.793852567Z" level=info msg="CreateContainer within sandbox \"6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 00:12:48.802781 containerd[1512]: time="2025-10-13T00:12:48.802733374Z" level=info msg="Container 78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067: 
CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:48.810271 containerd[1512]: time="2025-10-13T00:12:48.810230859Z" level=info msg="CreateContainer within sandbox \"6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067\"" Oct 13 00:12:48.810714 containerd[1512]: time="2025-10-13T00:12:48.810690540Z" level=info msg="StartContainer for \"78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067\"" Oct 13 00:12:48.812664 containerd[1512]: time="2025-10-13T00:12:48.812573261Z" level=info msg="connecting to shim 78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067" address="unix:///run/containerd/s/71249dd4eedb5da24f420b2f2b5ac2c1f810a9f324d7fc1abb0ff22cf0474e45" protocol=ttrpc version=3 Oct 13 00:12:48.837718 systemd[1]: Started cri-containerd-78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067.scope - libcontainer container 78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067. Oct 13 00:12:48.890695 systemd[1]: cri-containerd-78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067.scope: Deactivated successfully. Oct 13 00:12:48.918544 containerd[1512]: time="2025-10-13T00:12:48.918470618Z" level=info msg="StartContainer for \"78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067\" returns successfully" Oct 13 00:12:48.937670 containerd[1512]: time="2025-10-13T00:12:48.937629151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067\" id:\"78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067\" pid:3345 exited_at:{seconds:1760314368 nanos:936898551}" Oct 13 00:12:48.943334 containerd[1512]: time="2025-10-13T00:12:48.943289595Z" level=info msg="received exit event container_id:\"78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067\" id:\"78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067\" pid:3345 exited_at:{seconds:1760314368 nanos:936898551}" Oct 13 00:12:48.969018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78260f47c4f00487b47023fde43ae8c1cf54b742b2a949eea6a9ddb2a621d067-rootfs.mount: Deactivated successfully. 
Oct 13 00:12:49.478475 kubelet[2635]: E1013 00:12:49.478260 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxnvf" podUID="ca6273ce-aeab-40b7-bf01-7ab04b5d2d19" Oct 13 00:12:49.559887 kubelet[2635]: I1013 00:12:49.559851 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:12:49.561849 containerd[1512]: time="2025-10-13T00:12:49.561806217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 00:12:51.479538 kubelet[2635]: E1013 00:12:51.478476 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxnvf" podUID="ca6273ce-aeab-40b7-bf01-7ab04b5d2d19" Oct 13 00:12:53.183017 containerd[1512]: time="2025-10-13T00:12:53.182968239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:53.183653 containerd[1512]: time="2025-10-13T00:12:53.183618760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Oct 13 00:12:53.184280 containerd[1512]: time="2025-10-13T00:12:53.184225120Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:53.186145 containerd[1512]: time="2025-10-13T00:12:53.186075321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:53.186899 containerd[1512]: time="2025-10-13T00:12:53.186854801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.625008864s" Oct 13 00:12:53.186899 containerd[1512]: time="2025-10-13T00:12:53.186894641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Oct 13 00:12:53.190400 containerd[1512]: time="2025-10-13T00:12:53.190364803Z" level=info msg="CreateContainer within sandbox \"6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 00:12:53.199486 containerd[1512]: time="2025-10-13T00:12:53.198592327Z" level=info msg="Container cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:53.206315 containerd[1512]: time="2025-10-13T00:12:53.206271771Z" level=info msg="CreateContainer within sandbox \"6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0\"" Oct 13 00:12:53.207065 containerd[1512]: 
time="2025-10-13T00:12:53.207033532Z" level=info msg="StartContainer for \"cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0\"" Oct 13 00:12:53.208819 containerd[1512]: time="2025-10-13T00:12:53.208762013Z" level=info msg="connecting to shim cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0" address="unix:///run/containerd/s/71249dd4eedb5da24f420b2f2b5ac2c1f810a9f324d7fc1abb0ff22cf0474e45" protocol=ttrpc version=3 Oct 13 00:12:53.231644 systemd[1]: Started cri-containerd-cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0.scope - libcontainer container cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0. Oct 13 00:12:53.341262 containerd[1512]: time="2025-10-13T00:12:53.341216442Z" level=info msg="StartContainer for \"cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0\" returns successfully" Oct 13 00:12:53.478308 kubelet[2635]: E1013 00:12:53.478187 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxnvf" podUID="ca6273ce-aeab-40b7-bf01-7ab04b5d2d19" Oct 13 00:12:53.838751 systemd[1]: cri-containerd-cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0.scope: Deactivated successfully. Oct 13 00:12:53.839032 systemd[1]: cri-containerd-cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0.scope: Consumed 489ms CPU time, 177.4M memory peak, 4K read from disk, 165.8M written to disk. Oct 13 00:12:53.844045 containerd[1512]: time="2025-10-13T00:12:53.843997185Z" level=info msg="received exit event container_id:\"cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0\" id:\"cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0\" pid:3407 exited_at:{seconds:1760314373 nanos:843791065}" Oct 13 00:12:53.844225 containerd[1512]: time="2025-10-13T00:12:53.844185825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0\" id:\"cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0\" pid:3407 exited_at:{seconds:1760314373 nanos:843791065}" Oct 13 00:12:53.851601 kubelet[2635]: I1013 00:12:53.851566 2635 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 00:12:53.872953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cae8252b5b811705d8d6fe1bec4f671c587e0f7914940f0fbe2314325ebb11f0-rootfs.mount: Deactivated successfully. Oct 13 00:12:53.898787 systemd[1]: Created slice kubepods-besteffort-pod242f3567_93e6_42a0_8b82_79c71138aef5.slice - libcontainer container kubepods-besteffort-pod242f3567_93e6_42a0_8b82_79c71138aef5.slice. Oct 13 00:12:53.911565 systemd[1]: Created slice kubepods-besteffort-pod9aa87cc4_2dfb_4665_a9b1_839086b3077d.slice - libcontainer container kubepods-besteffort-pod9aa87cc4_2dfb_4665_a9b1_839086b3077d.slice. Oct 13 00:12:53.919550 systemd[1]: Created slice kubepods-besteffort-pod73c8a3ff_e15d_439f_9d29_a5b1fc677e1f.slice - libcontainer container kubepods-besteffort-pod73c8a3ff_e15d_439f_9d29_a5b1fc677e1f.slice. Oct 13 00:12:53.926993 systemd[1]: Created slice kubepods-burstable-podd66a808f_8235_4b75_8811_149375c28468.slice - libcontainer container kubepods-burstable-podd66a808f_8235_4b75_8811_149375c28468.slice. 
Oct 13 00:12:53.933916 systemd[1]: Created slice kubepods-besteffort-podbfa8bdf3_2c21_4aff_90c9_b67d11b2e1a5.slice - libcontainer container kubepods-besteffort-podbfa8bdf3_2c21_4aff_90c9_b67d11b2e1a5.slice. Oct 13 00:12:53.939410 systemd[1]: Created slice kubepods-burstable-pod6e709049_f7ad_4bd4_8ec0_f8ac241876b5.slice - libcontainer container kubepods-burstable-pod6e709049_f7ad_4bd4_8ec0_f8ac241876b5.slice. Oct 13 00:12:53.945290 systemd[1]: Created slice kubepods-besteffort-pod24b47750_fdac_404a_9fc7_ef10b2e0d4a9.slice - libcontainer container kubepods-besteffort-pod24b47750_fdac_404a_9fc7_ef10b2e0d4a9.slice. Oct 13 00:12:53.963085 kubelet[2635]: I1013 00:12:53.963027 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/242f3567-93e6-42a0-8b82-79c71138aef5-calico-apiserver-certs\") pod \"calico-apiserver-7c44845d57-4zq68\" (UID: \"242f3567-93e6-42a0-8b82-79c71138aef5\") " pod="calico-apiserver/calico-apiserver-7c44845d57-4zq68" Oct 13 00:12:53.963085 kubelet[2635]: I1013 00:12:53.963078 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkbs\" (UniqueName: \"kubernetes.io/projected/73c8a3ff-e15d-439f-9d29-a5b1fc677e1f-kube-api-access-fkkbs\") pod \"goldmane-54d579b49d-b2b62\" (UID: \"73c8a3ff-e15d-439f-9d29-a5b1fc677e1f\") " pod="calico-system/goldmane-54d579b49d-b2b62" Oct 13 00:12:53.963085 kubelet[2635]: I1013 00:12:53.963096 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-whisker-ca-bundle\") pod \"whisker-5597f5674f-vs466\" (UID: \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\") " pod="calico-system/whisker-5597f5674f-vs466" Oct 13 00:12:53.963302 kubelet[2635]: I1013 00:12:53.963118 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e709049-f7ad-4bd4-8ec0-f8ac241876b5-config-volume\") pod \"coredns-668d6bf9bc-k8qms\" (UID: \"6e709049-f7ad-4bd4-8ec0-f8ac241876b5\") " pod="kube-system/coredns-668d6bf9bc-k8qms" Oct 13 00:12:53.963302 kubelet[2635]: I1013 00:12:53.963138 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c8a3ff-e15d-439f-9d29-a5b1fc677e1f-config\") pod \"goldmane-54d579b49d-b2b62\" (UID: \"73c8a3ff-e15d-439f-9d29-a5b1fc677e1f\") " pod="calico-system/goldmane-54d579b49d-b2b62" Oct 13 00:12:53.963302 kubelet[2635]: I1013 00:12:53.963153 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6smq\" (UniqueName: \"kubernetes.io/projected/6e709049-f7ad-4bd4-8ec0-f8ac241876b5-kube-api-access-v6smq\") pod \"coredns-668d6bf9bc-k8qms\" (UID: \"6e709049-f7ad-4bd4-8ec0-f8ac241876b5\") " pod="kube-system/coredns-668d6bf9bc-k8qms" Oct 13 00:12:53.963302 kubelet[2635]: I1013 00:12:53.963168 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbm7\" (UniqueName: \"kubernetes.io/projected/24b47750-fdac-404a-9fc7-ef10b2e0d4a9-kube-api-access-5dbm7\") pod \"calico-apiserver-7c44845d57-qzw75\" (UID: \"24b47750-fdac-404a-9fc7-ef10b2e0d4a9\") " pod="calico-apiserver/calico-apiserver-7c44845d57-qzw75" Oct 13 00:12:53.963302 kubelet[2635]: I1013 
00:12:53.963186 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/73c8a3ff-e15d-439f-9d29-a5b1fc677e1f-goldmane-key-pair\") pod \"goldmane-54d579b49d-b2b62\" (UID: \"73c8a3ff-e15d-439f-9d29-a5b1fc677e1f\") " pod="calico-system/goldmane-54d579b49d-b2b62" Oct 13 00:12:53.963411 kubelet[2635]: I1013 00:12:53.963205 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqzp4\" (UniqueName: \"kubernetes.io/projected/9aa87cc4-2dfb-4665-a9b1-839086b3077d-kube-api-access-mqzp4\") pod \"calico-kube-controllers-56bc948cb8-hp6bt\" (UID: \"9aa87cc4-2dfb-4665-a9b1-839086b3077d\") " pod="calico-system/calico-kube-controllers-56bc948cb8-hp6bt" Oct 13 00:12:53.963411 kubelet[2635]: I1013 00:12:53.963225 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqw7\" (UniqueName: \"kubernetes.io/projected/242f3567-93e6-42a0-8b82-79c71138aef5-kube-api-access-2kqw7\") pod \"calico-apiserver-7c44845d57-4zq68\" (UID: \"242f3567-93e6-42a0-8b82-79c71138aef5\") " pod="calico-apiserver/calico-apiserver-7c44845d57-4zq68" Oct 13 00:12:53.963411 kubelet[2635]: I1013 00:12:53.963240 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73c8a3ff-e15d-439f-9d29-a5b1fc677e1f-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-b2b62\" (UID: \"73c8a3ff-e15d-439f-9d29-a5b1fc677e1f\") " pod="calico-system/goldmane-54d579b49d-b2b62" Oct 13 00:12:53.963411 kubelet[2635]: I1013 00:12:53.963256 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnttf\" (UniqueName: \"kubernetes.io/projected/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-kube-api-access-lnttf\") pod \"whisker-5597f5674f-vs466\" (UID: \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\") " pod="calico-system/whisker-5597f5674f-vs466" Oct 13 00:12:53.963411 kubelet[2635]: I1013 00:12:53.963273 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d66a808f-8235-4b75-8811-149375c28468-config-volume\") pod \"coredns-668d6bf9bc-trtw6\" (UID: \"d66a808f-8235-4b75-8811-149375c28468\") " pod="kube-system/coredns-668d6bf9bc-trtw6" Oct 13 00:12:53.963546 kubelet[2635]: I1013 00:12:53.963288 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdprz\" (UniqueName: \"kubernetes.io/projected/d66a808f-8235-4b75-8811-149375c28468-kube-api-access-fdprz\") pod \"coredns-668d6bf9bc-trtw6\" (UID: \"d66a808f-8235-4b75-8811-149375c28468\") " pod="kube-system/coredns-668d6bf9bc-trtw6" Oct 13 00:12:53.963546 kubelet[2635]: I1013 00:12:53.963322 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-whisker-backend-key-pair\") pod \"whisker-5597f5674f-vs466\" (UID: \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\") " pod="calico-system/whisker-5597f5674f-vs466" Oct 13 00:12:53.963546 kubelet[2635]: I1013 00:12:53.963347 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/24b47750-fdac-404a-9fc7-ef10b2e0d4a9-calico-apiserver-certs\") pod \"calico-apiserver-7c44845d57-qzw75\" (UID: \"24b47750-fdac-404a-9fc7-ef10b2e0d4a9\") " pod="calico-apiserver/calico-apiserver-7c44845d57-qzw75" Oct 13 00:12:53.963546 kubelet[2635]: I1013 00:12:53.963366 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa87cc4-2dfb-4665-a9b1-839086b3077d-tigera-ca-bundle\") pod \"calico-kube-controllers-56bc948cb8-hp6bt\" (UID: \"9aa87cc4-2dfb-4665-a9b1-839086b3077d\") " pod="calico-system/calico-kube-controllers-56bc948cb8-hp6bt" Oct 13 00:12:54.203588 containerd[1512]: time="2025-10-13T00:12:54.203479007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44845d57-4zq68,Uid:242f3567-93e6-42a0-8b82-79c71138aef5,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:12:54.217380 containerd[1512]: time="2025-10-13T00:12:54.217317814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56bc948cb8-hp6bt,Uid:9aa87cc4-2dfb-4665-a9b1-839086b3077d,Namespace:calico-system,Attempt:0,}" Oct 13 00:12:54.226620 containerd[1512]: time="2025-10-13T00:12:54.226306058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b2b62,Uid:73c8a3ff-e15d-439f-9d29-a5b1fc677e1f,Namespace:calico-system,Attempt:0,}" Oct 13 00:12:54.232007 containerd[1512]: time="2025-10-13T00:12:54.231974061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trtw6,Uid:d66a808f-8235-4b75-8811-149375c28468,Namespace:kube-system,Attempt:0,}" Oct 13 00:12:54.237317 containerd[1512]: time="2025-10-13T00:12:54.237261983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5597f5674f-vs466,Uid:bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5,Namespace:calico-system,Attempt:0,}" Oct 13 00:12:54.243677 containerd[1512]: time="2025-10-13T00:12:54.243610667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8qms,Uid:6e709049-f7ad-4bd4-8ec0-f8ac241876b5,Namespace:kube-system,Attempt:0,}" Oct 13 00:12:54.253992 containerd[1512]: time="2025-10-13T00:12:54.253920312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44845d57-qzw75,Uid:24b47750-fdac-404a-9fc7-ef10b2e0d4a9,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:12:54.332595 containerd[1512]: time="2025-10-13T00:12:54.332547110Z" level=error msg="Failed to destroy network for sandbox \"a868cae64db7f82e7bef82409873ddd4d34b82cce747c6ecdb97d69db8dc1331\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.336527 containerd[1512]: time="2025-10-13T00:12:54.336060392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trtw6,Uid:d66a808f-8235-4b75-8811-149375c28468,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a868cae64db7f82e7bef82409873ddd4d34b82cce747c6ecdb97d69db8dc1331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.337279 kubelet[2635]: E1013 00:12:54.336877 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"a868cae64db7f82e7bef82409873ddd4d34b82cce747c6ecdb97d69db8dc1331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.337279 kubelet[2635]: E1013 00:12:54.336959 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a868cae64db7f82e7bef82409873ddd4d34b82cce747c6ecdb97d69db8dc1331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-trtw6" Oct 13 00:12:54.337279 kubelet[2635]: E1013 00:12:54.336978 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a868cae64db7f82e7bef82409873ddd4d34b82cce747c6ecdb97d69db8dc1331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-trtw6" Oct 13 00:12:54.337404 kubelet[2635]: E1013 00:12:54.337026 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-trtw6_kube-system(d66a808f-8235-4b75-8811-149375c28468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-trtw6_kube-system(d66a808f-8235-4b75-8811-149375c28468)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a868cae64db7f82e7bef82409873ddd4d34b82cce747c6ecdb97d69db8dc1331\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-trtw6" podUID="d66a808f-8235-4b75-8811-149375c28468" Oct 13 00:12:54.342731 containerd[1512]: time="2025-10-13T00:12:54.342692115Z" level=error msg="Failed to destroy network for sandbox \"5b99d3e57c612ba251eb2cd9e102aa819053fb3e73bbba45febc40abcc266fd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.343050 containerd[1512]: time="2025-10-13T00:12:54.343008355Z" level=error msg="Failed to destroy network for sandbox \"130f6f8594c8428a1e35ea46f7b04227c0c86dab7f843253ab7f7d1aae852357\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.343834 containerd[1512]: time="2025-10-13T00:12:54.343798356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b2b62,Uid:73c8a3ff-e15d-439f-9d29-a5b1fc677e1f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b99d3e57c612ba251eb2cd9e102aa819053fb3e73bbba45febc40abcc266fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.344289 kubelet[2635]: E1013 00:12:54.344138 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"5b99d3e57c612ba251eb2cd9e102aa819053fb3e73bbba45febc40abcc266fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.344522 kubelet[2635]: E1013 00:12:54.344422 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b99d3e57c612ba251eb2cd9e102aa819053fb3e73bbba45febc40abcc266fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-b2b62" Oct 13 00:12:54.344668 kubelet[2635]: E1013 00:12:54.344644 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b99d3e57c612ba251eb2cd9e102aa819053fb3e73bbba45febc40abcc266fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-b2b62" Oct 13 00:12:54.345154 kubelet[2635]: E1013 00:12:54.344921 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-b2b62_calico-system(73c8a3ff-e15d-439f-9d29-a5b1fc677e1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-b2b62_calico-system(73c8a3ff-e15d-439f-9d29-a5b1fc677e1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b99d3e57c612ba251eb2cd9e102aa819053fb3e73bbba45febc40abcc266fd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-b2b62" podUID="73c8a3ff-e15d-439f-9d29-a5b1fc677e1f" Oct 13 00:12:54.345305 containerd[1512]: time="2025-10-13T00:12:54.345261236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44845d57-4zq68,Uid:242f3567-93e6-42a0-8b82-79c71138aef5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"130f6f8594c8428a1e35ea46f7b04227c0c86dab7f843253ab7f7d1aae852357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.346516 kubelet[2635]: E1013 00:12:54.346483 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"130f6f8594c8428a1e35ea46f7b04227c0c86dab7f843253ab7f7d1aae852357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.346595 kubelet[2635]: E1013 00:12:54.346528 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"130f6f8594c8428a1e35ea46f7b04227c0c86dab7f843253ab7f7d1aae852357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c44845d57-4zq68" Oct 13 00:12:54.346595 kubelet[2635]: E1013 00:12:54.346545 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"130f6f8594c8428a1e35ea46f7b04227c0c86dab7f843253ab7f7d1aae852357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c44845d57-4zq68" Oct 13 00:12:54.346679 kubelet[2635]: E1013 00:12:54.346580 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c44845d57-4zq68_calico-apiserver(242f3567-93e6-42a0-8b82-79c71138aef5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c44845d57-4zq68_calico-apiserver(242f3567-93e6-42a0-8b82-79c71138aef5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"130f6f8594c8428a1e35ea46f7b04227c0c86dab7f843253ab7f7d1aae852357\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c44845d57-4zq68" podUID="242f3567-93e6-42a0-8b82-79c71138aef5" Oct 13 00:12:54.358306 containerd[1512]: time="2025-10-13T00:12:54.358222683Z" level=error msg="Failed to destroy network for sandbox \"ad748782a93fed5320bf9d70aa7a51743d0b0f14aaf90e90c37613a538676816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.362329 containerd[1512]: time="2025-10-13T00:12:54.362272485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56bc948cb8-hp6bt,Uid:9aa87cc4-2dfb-4665-a9b1-839086b3077d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad748782a93fed5320bf9d70aa7a51743d0b0f14aaf90e90c37613a538676816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.362984 kubelet[2635]: E1013 00:12:54.362525 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad748782a93fed5320bf9d70aa7a51743d0b0f14aaf90e90c37613a538676816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.362984 kubelet[2635]: E1013 00:12:54.362579 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad748782a93fed5320bf9d70aa7a51743d0b0f14aaf90e90c37613a538676816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56bc948cb8-hp6bt" Oct 13 00:12:54.362984 kubelet[2635]: E1013 00:12:54.362601 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ad748782a93fed5320bf9d70aa7a51743d0b0f14aaf90e90c37613a538676816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56bc948cb8-hp6bt" Oct 13 00:12:54.363080 kubelet[2635]: E1013 00:12:54.362636 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56bc948cb8-hp6bt_calico-system(9aa87cc4-2dfb-4665-a9b1-839086b3077d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56bc948cb8-hp6bt_calico-system(9aa87cc4-2dfb-4665-a9b1-839086b3077d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad748782a93fed5320bf9d70aa7a51743d0b0f14aaf90e90c37613a538676816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56bc948cb8-hp6bt" podUID="9aa87cc4-2dfb-4665-a9b1-839086b3077d" Oct 13 00:12:54.365432 containerd[1512]: time="2025-10-13T00:12:54.364542966Z" level=error msg="Failed to destroy network for sandbox \"7bb1767668b5d0a9f47f941747d2cc7c67401ecbd2404ca213afbab0e83a1219\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.365586 containerd[1512]: time="2025-10-13T00:12:54.365366126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44845d57-qzw75,Uid:24b47750-fdac-404a-9fc7-ef10b2e0d4a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb1767668b5d0a9f47f941747d2cc7c67401ecbd2404ca213afbab0e83a1219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.365995 kubelet[2635]: E1013 00:12:54.365931 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb1767668b5d0a9f47f941747d2cc7c67401ecbd2404ca213afbab0e83a1219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.366183 kubelet[2635]: E1013 00:12:54.366081 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb1767668b5d0a9f47f941747d2cc7c67401ecbd2404ca213afbab0e83a1219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c44845d57-qzw75" Oct 13 00:12:54.366183 kubelet[2635]: E1013 00:12:54.366106 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb1767668b5d0a9f47f941747d2cc7c67401ecbd2404ca213afbab0e83a1219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7c44845d57-qzw75" Oct 13 00:12:54.366183 kubelet[2635]: E1013 00:12:54.366145 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c44845d57-qzw75_calico-apiserver(24b47750-fdac-404a-9fc7-ef10b2e0d4a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c44845d57-qzw75_calico-apiserver(24b47750-fdac-404a-9fc7-ef10b2e0d4a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bb1767668b5d0a9f47f941747d2cc7c67401ecbd2404ca213afbab0e83a1219\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c44845d57-qzw75" podUID="24b47750-fdac-404a-9fc7-ef10b2e0d4a9" Oct 13 00:12:54.370465 containerd[1512]: time="2025-10-13T00:12:54.370415409Z" level=error msg="Failed to destroy network for sandbox \"d7af145daef5d7526a40f9a5231531e4044ae3ed94537e8719b2fa39693e6b58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.371413 containerd[1512]: time="2025-10-13T00:12:54.371368449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5597f5674f-vs466,Uid:bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7af145daef5d7526a40f9a5231531e4044ae3ed94537e8719b2fa39693e6b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.371591 kubelet[2635]: E1013 00:12:54.371556 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7af145daef5d7526a40f9a5231531e4044ae3ed94537e8719b2fa39693e6b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.371653 kubelet[2635]: E1013 00:12:54.371606 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7af145daef5d7526a40f9a5231531e4044ae3ed94537e8719b2fa39693e6b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5597f5674f-vs466" Oct 13 00:12:54.371653 kubelet[2635]: E1013 00:12:54.371626 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7af145daef5d7526a40f9a5231531e4044ae3ed94537e8719b2fa39693e6b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5597f5674f-vs466" Oct 13 00:12:54.371708 kubelet[2635]: E1013 00:12:54.371663 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5597f5674f-vs466_calico-system(bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5597f5674f-vs466_calico-system(bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7af145daef5d7526a40f9a5231531e4044ae3ed94537e8719b2fa39693e6b58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5597f5674f-vs466" podUID="bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5" Oct 13 00:12:54.374952 containerd[1512]: time="2025-10-13T00:12:54.374912731Z" level=error msg="Failed to destroy network for sandbox \"be670716f4eb41062fecfbbfe9fec2981d8397679d137a0144f61f9e8b696801\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.375918 containerd[1512]: time="2025-10-13T00:12:54.375881612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8qms,Uid:6e709049-f7ad-4bd4-8ec0-f8ac241876b5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be670716f4eb41062fecfbbfe9fec2981d8397679d137a0144f61f9e8b696801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.376236 kubelet[2635]: E1013 00:12:54.376137 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be670716f4eb41062fecfbbfe9fec2981d8397679d137a0144f61f9e8b696801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:54.378118 kubelet[2635]: E1013 00:12:54.376185 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be670716f4eb41062fecfbbfe9fec2981d8397679d137a0144f61f9e8b696801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k8qms" Oct 13 00:12:54.378186 kubelet[2635]: E1013 00:12:54.378119 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be670716f4eb41062fecfbbfe9fec2981d8397679d137a0144f61f9e8b696801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k8qms" Oct 13 00:12:54.378220 kubelet[2635]: E1013 00:12:54.378175 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k8qms_kube-system(6e709049-f7ad-4bd4-8ec0-f8ac241876b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k8qms_kube-system(6e709049-f7ad-4bd4-8ec0-f8ac241876b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be670716f4eb41062fecfbbfe9fec2981d8397679d137a0144f61f9e8b696801\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k8qms" podUID="6e709049-f7ad-4bd4-8ec0-f8ac241876b5" Oct 13 00:12:54.577988 containerd[1512]: time="2025-10-13T00:12:54.577933631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 00:12:55.200222 systemd[1]: run-netns-cni\x2d071d4b6c\x2d9fe6\x2d0f97\x2db8b5\x2de5a711394e10.mount: Deactivated successfully. Oct 13 00:12:55.200309 systemd[1]: run-netns-cni\x2d7c72eba6\x2dd907\x2dc1c6\x2dee7c\x2d629b06ce2ff1.mount: Deactivated successfully. Oct 13 00:12:55.200353 systemd[1]: run-netns-cni\x2d2ef923c1\x2d578a\x2da009\x2d7009\x2d1e0b2f654eb9.mount: Deactivated successfully. Oct 13 00:12:55.200396 systemd[1]: run-netns-cni\x2d65e06c43\x2d5722\x2d27a0\x2df1e5\x2db371312d6815.mount: Deactivated successfully. Oct 13 00:12:55.488805 systemd[1]: Created slice kubepods-besteffort-podca6273ce_aeab_40b7_bf01_7ab04b5d2d19.slice - libcontainer container kubepods-besteffort-podca6273ce_aeab_40b7_bf01_7ab04b5d2d19.slice. Oct 13 00:12:55.492833 containerd[1512]: time="2025-10-13T00:12:55.492685185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxnvf,Uid:ca6273ce-aeab-40b7-bf01-7ab04b5d2d19,Namespace:calico-system,Attempt:0,}" Oct 13 00:12:55.557728 containerd[1512]: time="2025-10-13T00:12:55.555656494Z" level=error msg="Failed to destroy network for sandbox \"249bed7720ea11e638c104db8209100e593b07e8bc49b9fcbc33169b9355f83d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:55.557533 systemd[1]: run-netns-cni\x2d1bad45ad\x2d18f7\x2dfb04\x2def8e\x2d2286df42dc45.mount: Deactivated successfully. 
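The mount units being cleaned up here (run-netns-cni\x2d..., and later var-lib-kubelet-pods-...\x7e...) are systemd's escaped spellings of filesystem paths: '/' becomes '-', and any character not allowed in a unit name is written as a \xNN escape, so '-' itself appears as \x2d and '~' as \x7e. A small Go sketch of the reverse mapping, written only to make the unit names above readable (unitToPath is illustrative, not systemd code):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unitToPath reverses the escaping seen in the mount unit names above:
// "-" separates path components, and \xNN encodes a literal byte
// (\x2d is "-", \x7e is "~").
func unitToPath(unit string) (string, error) {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
			if err != nil {
				return "", fmt.Errorf("bad escape at offset %d: %w", i, err)
			}
			b.WriteByte(byte(v))
			i += 3
		case name[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String(), nil
}

func main() {
	p, err := unitToPath(`run-netns-cni\x2d071d4b6c\x2d9fe6\x2d0f97\x2db8b5\x2de5a711394e10.mount`)
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // /run/netns/cni-071d4b6c-9fe6-0f97-b8b5-e5a711394e10
}

Run against the first unit above it prints /run/netns/cni-071d4b6c-9fe6-0f97-b8b5-e5a711394e10, i.e. the network namespace left over from one of the failed sandboxes.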
Oct 13 00:12:55.560146 containerd[1512]: time="2025-10-13T00:12:55.559801735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxnvf,Uid:ca6273ce-aeab-40b7-bf01-7ab04b5d2d19,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"249bed7720ea11e638c104db8209100e593b07e8bc49b9fcbc33169b9355f83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:55.560266 kubelet[2635]: E1013 00:12:55.560071 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249bed7720ea11e638c104db8209100e593b07e8bc49b9fcbc33169b9355f83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:12:55.560266 kubelet[2635]: E1013 00:12:55.560126 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249bed7720ea11e638c104db8209100e593b07e8bc49b9fcbc33169b9355f83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxnvf" Oct 13 00:12:55.560266 kubelet[2635]: E1013 00:12:55.560145 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"249bed7720ea11e638c104db8209100e593b07e8bc49b9fcbc33169b9355f83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxnvf" Oct 13 00:12:55.560574 kubelet[2635]: E1013 00:12:55.560183 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxnvf_calico-system(ca6273ce-aeab-40b7-bf01-7ab04b5d2d19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxnvf_calico-system(ca6273ce-aeab-40b7-bf01-7ab04b5d2d19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"249bed7720ea11e638c104db8209100e593b07e8bc49b9fcbc33169b9355f83d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxnvf" podUID="ca6273ce-aeab-40b7-bf01-7ab04b5d2d19" Oct 13 00:12:58.507446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989981083.mount: Deactivated successfully. 
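Every CreatePodSandbox failure above bottoms out in the same message: the Calico CNI plugin cannot stat /var/lib/calico/nodename. That file only exists once the calico/node container is running with /var/lib/calico mounted from the host, and at this point the calico/node image is still being pulled (PullImage was issued a few entries up and completes just below). A minimal sketch of the readiness check implied by the error text, not the plugin's actual code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path named in the errors above; calico-node writes it
// when it starts with /var/lib/calico mounted from the host.
const nodenameFile = "/var/lib/calico/nodename"

// loadNodename mirrors the failure mode in the log: a missing file yields the
// same guidance string the CNI plugin attaches to its error.
func loadNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		return "", err
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := loadNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

Once calico-node starts (the StartContainer entry further down) the file exists and the kubelet's periodic sandbox retries begin to succeed, which is consistent with the IP assignments that follow later in the log.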
Oct 13 00:12:58.776532 containerd[1512]: time="2025-10-13T00:12:58.770252906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Oct 13 00:12:58.776532 containerd[1512]: time="2025-10-13T00:12:58.773758667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.195783716s" Oct 13 00:12:58.776532 containerd[1512]: time="2025-10-13T00:12:58.776471188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Oct 13 00:12:58.776532 containerd[1512]: time="2025-10-13T00:12:58.775338348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:58.777074 containerd[1512]: time="2025-10-13T00:12:58.777031308Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:58.777495 containerd[1512]: time="2025-10-13T00:12:58.777472108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:12:58.786366 containerd[1512]: time="2025-10-13T00:12:58.786330432Z" level=info msg="CreateContainer within sandbox \"6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 00:12:58.801485 containerd[1512]: time="2025-10-13T00:12:58.800207157Z" level=info msg="Container 9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:12:58.811096 containerd[1512]: time="2025-10-13T00:12:58.811041961Z" level=info msg="CreateContainer within sandbox \"6c972f93827d994a09a040dc36b7b3a5accc02716b3a687a46091b67147b6a4f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4\"" Oct 13 00:12:58.811570 containerd[1512]: time="2025-10-13T00:12:58.811548281Z" level=info msg="StartContainer for \"9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4\"" Oct 13 00:12:58.814277 containerd[1512]: time="2025-10-13T00:12:58.814224042Z" level=info msg="connecting to shim 9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4" address="unix:///run/containerd/s/71249dd4eedb5da24f420b2f2b5ac2c1f810a9f324d7fc1abb0ff22cf0474e45" protocol=ttrpc version=3 Oct 13 00:12:58.835637 systemd[1]: Started cri-containerd-9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4.scope - libcontainer container 9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4. Oct 13 00:12:58.871594 containerd[1512]: time="2025-10-13T00:12:58.871555304Z" level=info msg="StartContainer for \"9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4\" returns successfully" Oct 13 00:12:59.001186 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 00:12:59.001322 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 13 00:12:59.201027 kubelet[2635]: I1013 00:12:59.200973 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnttf\" (UniqueName: \"kubernetes.io/projected/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-kube-api-access-lnttf\") pod \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\" (UID: \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\") " Oct 13 00:12:59.202022 kubelet[2635]: I1013 00:12:59.201132 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-whisker-ca-bundle\") pod \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\" (UID: \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\") " Oct 13 00:12:59.202022 kubelet[2635]: I1013 00:12:59.201432 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-whisker-backend-key-pair\") pod \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\" (UID: \"bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5\") " Oct 13 00:12:59.221395 kubelet[2635]: I1013 00:12:59.221324 2635 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5" (UID: "bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 00:12:59.221657 kubelet[2635]: I1013 00:12:59.221628 2635 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-kube-api-access-lnttf" (OuterVolumeSpecName: "kube-api-access-lnttf") pod "bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5" (UID: "bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5"). InnerVolumeSpecName "kube-api-access-lnttf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:12:59.223190 kubelet[2635]: I1013 00:12:59.223149 2635 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5" (UID: "bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:12:59.302648 kubelet[2635]: I1013 00:12:59.302584 2635 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lnttf\" (UniqueName: \"kubernetes.io/projected/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-kube-api-access-lnttf\") on node \"localhost\" DevicePath \"\"" Oct 13 00:12:59.302648 kubelet[2635]: I1013 00:12:59.302618 2635 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 00:12:59.302648 kubelet[2635]: I1013 00:12:59.302627 2635 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 00:12:59.493904 systemd[1]: Removed slice kubepods-besteffort-podbfa8bdf3_2c21_4aff_90c9_b67d11b2e1a5.slice - libcontainer container kubepods-besteffort-podbfa8bdf3_2c21_4aff_90c9_b67d11b2e1a5.slice. 
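The teardown above removes the dead whisker pod's three volumes: a projected service-account token, a ConfigMap, and a Secret. Kubelet keeps each of these under a per-pod directory, /var/lib/kubelet/pods/<pod UID>/volumes/<plugin, with '/' written as '~'>/<volume name>; the escaped mount units deactivated in the next entries are exactly those paths. A sketch of that convention, with only the UID and volume names taken from the log:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// podVolumeDir builds the directory kubelet uses for a pod volume:
// /var/lib/kubelet/pods/<pod UID>/volumes/<plugin, "/" replaced by "~">/<name>.
func podVolumeDir(podUID, plugin, volume string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes",
		strings.ReplaceAll(plugin, "/", "~"), volume)
}

func main() {
	uid := "bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5" // the whisker pod torn down above
	fmt.Println(podVolumeDir(uid, "kubernetes.io/projected", "kube-api-access-lnttf"))
	fmt.Println(podVolumeDir(uid, "kubernetes.io/secret", "whisker-backend-key-pair"))
	fmt.Println(podVolumeDir(uid, "kubernetes.io/configmap", "whisker-ca-bundle"))
}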
Oct 13 00:12:59.508069 systemd[1]: var-lib-kubelet-pods-bfa8bdf3\x2d2c21\x2d4aff\x2d90c9\x2db67d11b2e1a5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlnttf.mount: Deactivated successfully. Oct 13 00:12:59.508157 systemd[1]: var-lib-kubelet-pods-bfa8bdf3\x2d2c21\x2d4aff\x2d90c9\x2db67d11b2e1a5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 00:12:59.630344 kubelet[2635]: I1013 00:12:59.630274 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vdqtf" podStartSLOduration=2.154885478 podStartE2EDuration="14.630257457s" podCreationTimestamp="2025-10-13 00:12:45 +0000 UTC" firstStartedPulling="2025-10-13 00:12:46.302244209 +0000 UTC m=+20.918223545" lastFinishedPulling="2025-10-13 00:12:58.777616228 +0000 UTC m=+33.393595524" observedRunningTime="2025-10-13 00:12:59.618023572 +0000 UTC m=+34.234002908" watchObservedRunningTime="2025-10-13 00:12:59.630257457 +0000 UTC m=+34.246236753" Oct 13 00:12:59.671999 systemd[1]: Created slice kubepods-besteffort-pod014008f0_139f_4665_b80f_e31d90e2050b.slice - libcontainer container kubepods-besteffort-pod014008f0_139f_4665_b80f_e31d90e2050b.slice. Oct 13 00:12:59.706279 kubelet[2635]: I1013 00:12:59.706209 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/014008f0-139f-4665-b80f-e31d90e2050b-whisker-backend-key-pair\") pod \"whisker-6d46988c7b-2ktbz\" (UID: \"014008f0-139f-4665-b80f-e31d90e2050b\") " pod="calico-system/whisker-6d46988c7b-2ktbz" Oct 13 00:12:59.706279 kubelet[2635]: I1013 00:12:59.706257 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbsn7\" (UniqueName: \"kubernetes.io/projected/014008f0-139f-4665-b80f-e31d90e2050b-kube-api-access-qbsn7\") pod \"whisker-6d46988c7b-2ktbz\" (UID: \"014008f0-139f-4665-b80f-e31d90e2050b\") " pod="calico-system/whisker-6d46988c7b-2ktbz" Oct 13 00:12:59.706468 kubelet[2635]: I1013 00:12:59.706319 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/014008f0-139f-4665-b80f-e31d90e2050b-whisker-ca-bundle\") pod \"whisker-6d46988c7b-2ktbz\" (UID: \"014008f0-139f-4665-b80f-e31d90e2050b\") " pod="calico-system/whisker-6d46988c7b-2ktbz" Oct 13 00:12:59.978638 containerd[1512]: time="2025-10-13T00:12:59.978583821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d46988c7b-2ktbz,Uid:014008f0-139f-4665-b80f-e31d90e2050b,Namespace:calico-system,Attempt:0,}" Oct 13 00:13:00.159946 systemd-networkd[1429]: cali368d72d35f7: Link UP Oct 13 00:13:00.160098 systemd-networkd[1429]: cali368d72d35f7: Gained carrier Oct 13 00:13:00.177104 containerd[1512]: 2025-10-13 00:13:00.031 [INFO][3784] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:13:00.177104 containerd[1512]: 2025-10-13 00:13:00.060 [INFO][3784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d46988c7b--2ktbz-eth0 whisker-6d46988c7b- calico-system 014008f0-139f-4665-b80f-e31d90e2050b 887 0 2025-10-13 00:12:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d46988c7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 
localhost whisker-6d46988c7b-2ktbz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali368d72d35f7 [] [] }} ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Namespace="calico-system" Pod="whisker-6d46988c7b-2ktbz" WorkloadEndpoint="localhost-k8s-whisker--6d46988c7b--2ktbz-" Oct 13 00:13:00.177104 containerd[1512]: 2025-10-13 00:13:00.060 [INFO][3784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Namespace="calico-system" Pod="whisker-6d46988c7b-2ktbz" WorkloadEndpoint="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" Oct 13 00:13:00.177104 containerd[1512]: 2025-10-13 00:13:00.112 [INFO][3798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" HandleID="k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Workload="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.112 [INFO][3798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" HandleID="k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Workload="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d46988c7b-2ktbz", "timestamp":"2025-10-13 00:13:00.112268346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.112 [INFO][3798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.112 [INFO][3798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
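The pod_startup_latency_tracker entry above for calico-node-vdqtf reports both podStartE2EDuration=14.630257457s (observedRunningTime minus the 00:12:45 podCreationTimestamp) and a much smaller podStartSLOduration=2.154885478s. The figures are consistent with the SLO number simply excluding the image-pull window bounded by firstStartedPulling and lastFinishedPulling; a quick check using only the monotonic offsets copied from that entry:

package main

import "fmt"

func main() {
	// The "m=+..." monotonic offsets and durations from the
	// pod_startup_latency_tracker entry for calico-node-vdqtf above.
	const (
		firstStartedPulling = 20.918223545 // seconds since kubelet start
		lastFinishedPulling = 33.393595524
		podStartE2E         = 14.630257457 // observedRunningTime - podCreationTimestamp
	)

	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9fs\n", pullWindow)              // 12.475371979s
	fmt.Printf("e2e minus pulling: %.9fs\n", podStartE2E-pullWindow)  // 2.154885478s, the reported SLO duration
}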
Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.113 [INFO][3798] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.123 [INFO][3798] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" host="localhost" Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.128 [INFO][3798] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.132 [INFO][3798] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.135 [INFO][3798] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.137 [INFO][3798] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:00.177308 containerd[1512]: 2025-10-13 00:13:00.137 [INFO][3798] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" host="localhost" Oct 13 00:13:00.177532 containerd[1512]: 2025-10-13 00:13:00.139 [INFO][3798] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31 Oct 13 00:13:00.177532 containerd[1512]: 2025-10-13 00:13:00.142 [INFO][3798] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" host="localhost" Oct 13 00:13:00.177532 containerd[1512]: 2025-10-13 00:13:00.151 [INFO][3798] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" host="localhost" Oct 13 00:13:00.177532 containerd[1512]: 2025-10-13 00:13:00.151 [INFO][3798] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" host="localhost" Oct 13 00:13:00.177532 containerd[1512]: 2025-10-13 00:13:00.151 [INFO][3798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
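The IPAM exchange above takes the host-wide lock, confirms this node's affinity for the block 192.168.88.128/26, claims 192.168.88.129 for the whisker endpoint, and releases the lock. A quick sanity check on the block arithmetic (the prefix and addresses come from the log; the /26 math is standard; .130 and .131 go to the next workloads further down):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The host-affine IPAM block from the entries above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// 32-26 = 6 host bits, i.e. 64 addresses: 192.168.88.128 through .191.
	fmt.Printf("addresses in block: %d\n", 1<<(32-block.Bits()))

	// .128 is the prefix's base address; .129 is the first one handed out
	// above, and .130/.131 go to the next workloads later in the log.
	for _, s := range []string{"192.168.88.129", "192.168.88.130", "192.168.88.131", "192.168.88.192"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%-15s in %s: %v\n", s, block, block.Contains(ip))
	}
}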
Oct 13 00:13:00.177532 containerd[1512]: 2025-10-13 00:13:00.151 [INFO][3798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" HandleID="k8s-pod-network.d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Workload="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" Oct 13 00:13:00.177654 containerd[1512]: 2025-10-13 00:13:00.154 [INFO][3784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Namespace="calico-system" Pod="whisker-6d46988c7b-2ktbz" WorkloadEndpoint="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d46988c7b--2ktbz-eth0", GenerateName:"whisker-6d46988c7b-", Namespace:"calico-system", SelfLink:"", UID:"014008f0-139f-4665-b80f-e31d90e2050b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d46988c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d46988c7b-2ktbz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali368d72d35f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:00.177654 containerd[1512]: 2025-10-13 00:13:00.154 [INFO][3784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Namespace="calico-system" Pod="whisker-6d46988c7b-2ktbz" WorkloadEndpoint="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" Oct 13 00:13:00.177729 containerd[1512]: 2025-10-13 00:13:00.154 [INFO][3784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali368d72d35f7 ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Namespace="calico-system" Pod="whisker-6d46988c7b-2ktbz" WorkloadEndpoint="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" Oct 13 00:13:00.177729 containerd[1512]: 2025-10-13 00:13:00.162 [INFO][3784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Namespace="calico-system" Pod="whisker-6d46988c7b-2ktbz" WorkloadEndpoint="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" Oct 13 00:13:00.177770 containerd[1512]: 2025-10-13 00:13:00.163 [INFO][3784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Namespace="calico-system" Pod="whisker-6d46988c7b-2ktbz" WorkloadEndpoint="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d46988c7b--2ktbz-eth0", GenerateName:"whisker-6d46988c7b-", Namespace:"calico-system", SelfLink:"", UID:"014008f0-139f-4665-b80f-e31d90e2050b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d46988c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31", Pod:"whisker-6d46988c7b-2ktbz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali368d72d35f7", MAC:"9e:7f:3b:90:15:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:00.177818 containerd[1512]: 2025-10-13 00:13:00.173 [INFO][3784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" Namespace="calico-system" Pod="whisker-6d46988c7b-2ktbz" WorkloadEndpoint="localhost-k8s-whisker--6d46988c7b--2ktbz-eth0" Oct 13 00:13:00.224192 containerd[1512]: time="2025-10-13T00:13:00.224148023Z" level=info msg="connecting to shim d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31" address="unix:///run/containerd/s/1349d2e1771c8e78ad2f84bfe795b034293f41b969dfacf1bd7f9ec5668e8472" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:13:00.247620 systemd[1]: Started cri-containerd-d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31.scope - libcontainer container d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31. 
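The WorkloadEndpoint written to the datastore above pairs a generated MAC (9e:7f:3b:90:15:2e) with a single-address network (192.168.88.129/32) behind the cali368d72d35f7 veth. Two small properties of those values, checked with stdlib parsing only (the values come from the log; the code is just an illustration):

package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// Values copied from the WorkloadEndpoint written to the datastore above.
	mac, err := net.ParseMAC("9e:7f:3b:90:15:2e")
	if err != nil {
		panic(err)
	}
	// Bit 1 of the first octet marks a locally administered address, the
	// usual sign of a generated veth MAC rather than a vendor-assigned one.
	fmt.Printf("locally administered: %v\n", mac[0]&0x02 != 0) // true
	fmt.Printf("multicast:            %v\n", mac[0]&0x01 != 0) // false, unicast

	// IPNetworks carries a /32: the workload is addressed as a single host
	// behind the cali368d72d35f7 veth named in the same entry.
	prefix := netip.MustParsePrefix("192.168.88.129/32")
	fmt.Printf("single IP: %v\n", prefix.IsSingleIP()) // true
}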
Oct 13 00:13:00.258854 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:13:00.277722 containerd[1512]: time="2025-10-13T00:13:00.277675561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d46988c7b-2ktbz,Uid:014008f0-139f-4665-b80f-e31d90e2050b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31\"" Oct 13 00:13:00.278994 containerd[1512]: time="2025-10-13T00:13:00.278960761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 00:13:00.601151 kubelet[2635]: I1013 00:13:00.601103 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:13:01.498276 kubelet[2635]: I1013 00:13:01.498230 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5" path="/var/lib/kubelet/pods/bfa8bdf3-2c21-4aff-90c9-b67d11b2e1a5/volumes" Oct 13 00:13:01.623947 containerd[1512]: time="2025-10-13T00:13:01.623537676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:01.624345 containerd[1512]: time="2025-10-13T00:13:01.624299076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Oct 13 00:13:01.624899 containerd[1512]: time="2025-10-13T00:13:01.624869797Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:01.627241 containerd[1512]: time="2025-10-13T00:13:01.627211037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:01.627852 containerd[1512]: time="2025-10-13T00:13:01.627823718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.348832957s" Oct 13 00:13:01.627904 containerd[1512]: time="2025-10-13T00:13:01.627851958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Oct 13 00:13:01.633178 containerd[1512]: time="2025-10-13T00:13:01.633150719Z" level=info msg="CreateContainer within sandbox \"d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 00:13:01.643485 containerd[1512]: time="2025-10-13T00:13:01.642942562Z" level=info msg="Container 51fbc4d3040facc9bf2bb53c0faad418eb55437e6149e2dba7c4c4ecbf5aafee: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:01.648866 containerd[1512]: time="2025-10-13T00:13:01.648827044Z" level=info msg="CreateContainer within sandbox \"d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"51fbc4d3040facc9bf2bb53c0faad418eb55437e6149e2dba7c4c4ecbf5aafee\"" Oct 13 00:13:01.649489 containerd[1512]: time="2025-10-13T00:13:01.649437644Z" level=info 
msg="StartContainer for \"51fbc4d3040facc9bf2bb53c0faad418eb55437e6149e2dba7c4c4ecbf5aafee\"" Oct 13 00:13:01.650757 containerd[1512]: time="2025-10-13T00:13:01.650730125Z" level=info msg="connecting to shim 51fbc4d3040facc9bf2bb53c0faad418eb55437e6149e2dba7c4c4ecbf5aafee" address="unix:///run/containerd/s/1349d2e1771c8e78ad2f84bfe795b034293f41b969dfacf1bd7f9ec5668e8472" protocol=ttrpc version=3 Oct 13 00:13:01.669614 systemd[1]: Started cri-containerd-51fbc4d3040facc9bf2bb53c0faad418eb55437e6149e2dba7c4c4ecbf5aafee.scope - libcontainer container 51fbc4d3040facc9bf2bb53c0faad418eb55437e6149e2dba7c4c4ecbf5aafee. Oct 13 00:13:01.701325 containerd[1512]: time="2025-10-13T00:13:01.701289540Z" level=info msg="StartContainer for \"51fbc4d3040facc9bf2bb53c0faad418eb55437e6149e2dba7c4c4ecbf5aafee\" returns successfully" Oct 13 00:13:01.702255 containerd[1512]: time="2025-10-13T00:13:01.702162581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 00:13:01.734654 systemd-networkd[1429]: cali368d72d35f7: Gained IPv6LL Oct 13 00:13:03.204826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2486431543.mount: Deactivated successfully. Oct 13 00:13:03.240410 containerd[1512]: time="2025-10-13T00:13:03.240220313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:03.241218 containerd[1512]: time="2025-10-13T00:13:03.241193353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Oct 13 00:13:03.242234 containerd[1512]: time="2025-10-13T00:13:03.242211353Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:03.244241 containerd[1512]: time="2025-10-13T00:13:03.244196434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:03.244829 containerd[1512]: time="2025-10-13T00:13:03.244798834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.542591093s" Oct 13 00:13:03.244888 containerd[1512]: time="2025-10-13T00:13:03.244831194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Oct 13 00:13:03.247827 containerd[1512]: time="2025-10-13T00:13:03.247324115Z" level=info msg="CreateContainer within sandbox \"d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 00:13:03.256495 containerd[1512]: time="2025-10-13T00:13:03.255730717Z" level=info msg="Container 1d19e4c077b5d663e2007afe02cbe5d8bf7904172063b41ecf99f5b3215bf6ce: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:03.259097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513447026.mount: Deactivated successfully. 
Oct 13 00:13:03.268051 containerd[1512]: time="2025-10-13T00:13:03.267957880Z" level=info msg="CreateContainer within sandbox \"d0f54fd16beaaa0011ae8447823d2253b4d3e58680902e8ffcb7c196ac199c31\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1d19e4c077b5d663e2007afe02cbe5d8bf7904172063b41ecf99f5b3215bf6ce\"" Oct 13 00:13:03.269650 containerd[1512]: time="2025-10-13T00:13:03.269629201Z" level=info msg="StartContainer for \"1d19e4c077b5d663e2007afe02cbe5d8bf7904172063b41ecf99f5b3215bf6ce\"" Oct 13 00:13:03.271051 containerd[1512]: time="2025-10-13T00:13:03.271006441Z" level=info msg="connecting to shim 1d19e4c077b5d663e2007afe02cbe5d8bf7904172063b41ecf99f5b3215bf6ce" address="unix:///run/containerd/s/1349d2e1771c8e78ad2f84bfe795b034293f41b969dfacf1bd7f9ec5668e8472" protocol=ttrpc version=3 Oct 13 00:13:03.292608 systemd[1]: Started cri-containerd-1d19e4c077b5d663e2007afe02cbe5d8bf7904172063b41ecf99f5b3215bf6ce.scope - libcontainer container 1d19e4c077b5d663e2007afe02cbe5d8bf7904172063b41ecf99f5b3215bf6ce. Oct 13 00:13:03.323876 containerd[1512]: time="2025-10-13T00:13:03.323838816Z" level=info msg="StartContainer for \"1d19e4c077b5d663e2007afe02cbe5d8bf7904172063b41ecf99f5b3215bf6ce\" returns successfully" Oct 13 00:13:03.625933 kubelet[2635]: I1013 00:13:03.625876 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d46988c7b-2ktbz" podStartSLOduration=1.6587237049999999 podStartE2EDuration="4.625857338s" podCreationTimestamp="2025-10-13 00:12:59 +0000 UTC" firstStartedPulling="2025-10-13 00:13:00.278728681 +0000 UTC m=+34.894707977" lastFinishedPulling="2025-10-13 00:13:03.245862274 +0000 UTC m=+37.861841610" observedRunningTime="2025-10-13 00:13:03.625046458 +0000 UTC m=+38.241025794" watchObservedRunningTime="2025-10-13 00:13:03.625857338 +0000 UTC m=+38.241836674" Oct 13 00:13:03.806512 kubelet[2635]: I1013 00:13:03.806237 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:13:04.793511 systemd-networkd[1429]: vxlan.calico: Link UP Oct 13 00:13:04.793518 systemd-networkd[1429]: vxlan.calico: Gained carrier Oct 13 00:13:05.479475 containerd[1512]: time="2025-10-13T00:13:05.479276974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b2b62,Uid:73c8a3ff-e15d-439f-9d29-a5b1fc677e1f,Namespace:calico-system,Attempt:0,}" Oct 13 00:13:05.604490 systemd-networkd[1429]: calia4a071789f9: Link UP Oct 13 00:13:05.605299 systemd-networkd[1429]: calia4a071789f9: Gained carrier Oct 13 00:13:05.629279 containerd[1512]: 2025-10-13 00:13:05.527 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--b2b62-eth0 goldmane-54d579b49d- calico-system 73c8a3ff-e15d-439f-9d29-a5b1fc677e1f 822 0 2025-10-13 00:12:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-b2b62 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia4a071789f9 [] [] }} ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Namespace="calico-system" Pod="goldmane-54d579b49d-b2b62" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b2b62-" Oct 13 00:13:05.629279 containerd[1512]: 2025-10-13 00:13:05.527 [INFO][4260] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Namespace="calico-system" Pod="goldmane-54d579b49d-b2b62" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" Oct 13 00:13:05.629279 containerd[1512]: 2025-10-13 00:13:05.560 [INFO][4276] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" HandleID="k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Workload="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.560 [INFO][4276] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" HandleID="k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Workload="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-b2b62", "timestamp":"2025-10-13 00:13:05.559982114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.561 [INFO][4276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.561 [INFO][4276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.561 [INFO][4276] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.570 [INFO][4276] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" host="localhost" Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.575 [INFO][4276] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.580 [INFO][4276] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.581 [INFO][4276] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.584 [INFO][4276] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:05.629942 containerd[1512]: 2025-10-13 00:13:05.584 [INFO][4276] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" host="localhost" Oct 13 00:13:05.631590 containerd[1512]: 2025-10-13 00:13:05.586 [INFO][4276] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9 Oct 13 00:13:05.631590 containerd[1512]: 2025-10-13 00:13:05.590 [INFO][4276] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" host="localhost" Oct 13 00:13:05.631590 containerd[1512]: 2025-10-13 00:13:05.596 [INFO][4276] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" host="localhost" Oct 13 00:13:05.631590 containerd[1512]: 2025-10-13 00:13:05.596 [INFO][4276] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" host="localhost" Oct 13 00:13:05.631590 containerd[1512]: 2025-10-13 00:13:05.597 [INFO][4276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:13:05.631590 containerd[1512]: 2025-10-13 00:13:05.597 [INFO][4276] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" HandleID="k8s-pod-network.7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Workload="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" Oct 13 00:13:05.631734 containerd[1512]: 2025-10-13 00:13:05.601 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Namespace="calico-system" Pod="goldmane-54d579b49d-b2b62" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--b2b62-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"73c8a3ff-e15d-439f-9d29-a5b1fc677e1f", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-b2b62", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia4a071789f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:05.631734 containerd[1512]: 2025-10-13 00:13:05.601 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Namespace="calico-system" Pod="goldmane-54d579b49d-b2b62" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" Oct 13 00:13:05.631809 containerd[1512]: 2025-10-13 00:13:05.601 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4a071789f9 ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Namespace="calico-system" Pod="goldmane-54d579b49d-b2b62" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" Oct 13 00:13:05.631809 containerd[1512]: 2025-10-13 00:13:05.605 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Namespace="calico-system" Pod="goldmane-54d579b49d-b2b62" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" Oct 13 00:13:05.631890 containerd[1512]: 2025-10-13 00:13:05.606 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Namespace="calico-system" Pod="goldmane-54d579b49d-b2b62" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--b2b62-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"73c8a3ff-e15d-439f-9d29-a5b1fc677e1f", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9", Pod:"goldmane-54d579b49d-b2b62", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia4a071789f9", MAC:"be:d1:e5:89:67:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:05.631940 containerd[1512]: 2025-10-13 00:13:05.624 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" Namespace="calico-system" Pod="goldmane-54d579b49d-b2b62" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b2b62-eth0" Oct 13 00:13:05.653100 containerd[1512]: time="2025-10-13T00:13:05.652637216Z" level=info msg="connecting to shim 7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9" address="unix:///run/containerd/s/68f076464c3e1a7459b89530f2bbb98c3b7252d47e888a839cbd274d3b494b09" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:13:05.676617 systemd[1]: Started cri-containerd-7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9.scope - libcontainer container 7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9. 
Oct 13 00:13:05.688200 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:13:05.709543 containerd[1512]: time="2025-10-13T00:13:05.709486710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b2b62,Uid:73c8a3ff-e15d-439f-9d29-a5b1fc677e1f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9\"" Oct 13 00:13:05.718620 containerd[1512]: time="2025-10-13T00:13:05.718587512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 00:13:06.278806 systemd-networkd[1429]: vxlan.calico: Gained IPv6LL Oct 13 00:13:06.382595 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:54696.service - OpenSSH per-connection server daemon (10.0.0.1:54696). Oct 13 00:13:06.455017 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 54696 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:06.456555 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:06.460447 systemd-logind[1484]: New session 8 of user core. Oct 13 00:13:06.467596 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 00:13:06.637524 sshd[4348]: Connection closed by 10.0.0.1 port 54696 Oct 13 00:13:06.638197 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:06.642547 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:54696.service: Deactivated successfully. Oct 13 00:13:06.644732 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 00:13:06.645748 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit. Oct 13 00:13:06.647402 systemd-logind[1484]: Removed session 8. Oct 13 00:13:06.854591 systemd-networkd[1429]: calia4a071789f9: Gained IPv6LL Oct 13 00:13:07.250837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756674521.mount: Deactivated successfully. 
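The SSH session above identifies the accepted key only by its fingerprint, SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw. That form is the unpadded base64 of a SHA-256 digest over the public key's wire-format blob. A stdlib-only sketch; the sample blob below is made up, the real input would be the base64-decoded key from the core user's authorized_keys:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint reproduces the "SHA256:..." form sshd logs: the unpadded base64
// of a SHA-256 digest over the public key's wire-format blob.
func fingerprint(pubKeyBlob []byte) string {
	sum := sha256.Sum256(pubKeyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	// Hypothetical bytes, just to exercise the function; the real input is the
	// base64-decoded key blob from the user's authorized_keys entry.
	blob := []byte("ssh-rsa example-public-key-blob")
	fmt.Println(fingerprint(blob))
}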
Oct 13 00:13:07.480116 containerd[1512]: time="2025-10-13T00:13:07.480050668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trtw6,Uid:d66a808f-8235-4b75-8811-149375c28468,Namespace:kube-system,Attempt:0,}" Oct 13 00:13:07.480602 containerd[1512]: time="2025-10-13T00:13:07.480074308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44845d57-4zq68,Uid:242f3567-93e6-42a0-8b82-79c71138aef5,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:13:07.596340 containerd[1512]: time="2025-10-13T00:13:07.596298173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:07.596943 containerd[1512]: time="2025-10-13T00:13:07.596915053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Oct 13 00:13:07.597845 containerd[1512]: time="2025-10-13T00:13:07.597824613Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:07.600783 containerd[1512]: time="2025-10-13T00:13:07.600523053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:07.600783 containerd[1512]: time="2025-10-13T00:13:07.600671853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 1.882052821s" Oct 13 00:13:07.600783 containerd[1512]: time="2025-10-13T00:13:07.600709653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Oct 13 00:13:07.611360 systemd-networkd[1429]: calib6555a2fd3f: Link UP Oct 13 00:13:07.613117 systemd-networkd[1429]: calib6555a2fd3f: Gained carrier Oct 13 00:13:07.614428 containerd[1512]: time="2025-10-13T00:13:07.614224296Z" level=info msg="CreateContainer within sandbox \"7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 00:13:07.625402 containerd[1512]: time="2025-10-13T00:13:07.625355259Z" level=info msg="Container b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:07.630466 containerd[1512]: 2025-10-13 00:13:07.534 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--trtw6-eth0 coredns-668d6bf9bc- kube-system d66a808f-8235-4b75-8811-149375c28468 824 0 2025-10-13 00:12:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-trtw6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6555a2fd3f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-trtw6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--trtw6-" Oct 13 00:13:07.630466 containerd[1512]: 2025-10-13 00:13:07.534 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-trtw6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" Oct 13 00:13:07.630466 containerd[1512]: 2025-10-13 00:13:07.564 [INFO][4408] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" HandleID="k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Workload="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.564 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" HandleID="k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Workload="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012ee70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-trtw6", "timestamp":"2025-10-13 00:13:07.564540046 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.564 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.564 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.564 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.574 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" host="localhost" Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.578 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.585 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.587 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.590 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:07.630843 containerd[1512]: 2025-10-13 00:13:07.590 [INFO][4408] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" host="localhost" Oct 13 00:13:07.632051 containerd[1512]: 2025-10-13 00:13:07.591 [INFO][4408] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae Oct 13 00:13:07.632051 containerd[1512]: 2025-10-13 00:13:07.594 [INFO][4408] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" host="localhost" Oct 13 00:13:07.632051 containerd[1512]: 2025-10-13 00:13:07.602 [INFO][4408] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" host="localhost" Oct 13 00:13:07.632051 containerd[1512]: 2025-10-13 00:13:07.602 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" host="localhost" Oct 13 00:13:07.632051 containerd[1512]: 2025-10-13 00:13:07.602 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:13:07.632051 containerd[1512]: 2025-10-13 00:13:07.602 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" HandleID="k8s-pod-network.344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Workload="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" Oct 13 00:13:07.632203 containerd[1512]: 2025-10-13 00:13:07.606 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-trtw6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--trtw6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d66a808f-8235-4b75-8811-149375c28468", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-trtw6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6555a2fd3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:07.632275 containerd[1512]: 2025-10-13 00:13:07.606 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-trtw6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" Oct 13 00:13:07.632275 containerd[1512]: 2025-10-13 00:13:07.606 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6555a2fd3f ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-trtw6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" Oct 13 00:13:07.632275 containerd[1512]: 2025-10-13 00:13:07.613 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-trtw6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" Oct 13 00:13:07.632339 
containerd[1512]: 2025-10-13 00:13:07.616 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-trtw6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--trtw6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d66a808f-8235-4b75-8811-149375c28468", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae", Pod:"coredns-668d6bf9bc-trtw6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6555a2fd3f", MAC:"72:98:dc:e4:e5:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:07.632339 containerd[1512]: 2025-10-13 00:13:07.628 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-trtw6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--trtw6-eth0" Oct 13 00:13:07.636978 containerd[1512]: time="2025-10-13T00:13:07.636873501Z" level=info msg="CreateContainer within sandbox \"7178456e63e15ce15df30e6da3d912d7c8dbc20d930a3ce50c1a4c852f9c8ed9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62\"" Oct 13 00:13:07.638289 containerd[1512]: time="2025-10-13T00:13:07.638227261Z" level=info msg="StartContainer for \"b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62\"" Oct 13 00:13:07.641911 containerd[1512]: time="2025-10-13T00:13:07.641854102Z" level=info msg="connecting to shim b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62" address="unix:///run/containerd/s/68f076464c3e1a7459b89530f2bbb98c3b7252d47e888a839cbd274d3b494b09" protocol=ttrpc version=3 Oct 13 00:13:07.660274 containerd[1512]: time="2025-10-13T00:13:07.660235706Z" level=info msg="connecting to shim 344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae" 
address="unix:///run/containerd/s/6a4c69664e2ae72851dc09e0d87af5b78acd948a1f3356dc45a8ef94d37995f9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:13:07.660623 systemd[1]: Started cri-containerd-b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62.scope - libcontainer container b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62. Oct 13 00:13:07.701600 systemd[1]: Started cri-containerd-344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae.scope - libcontainer container 344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae. Oct 13 00:13:07.711193 containerd[1512]: time="2025-10-13T00:13:07.711157717Z" level=info msg="StartContainer for \"b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62\" returns successfully" Oct 13 00:13:07.716759 systemd-networkd[1429]: calif3b53135673: Link UP Oct 13 00:13:07.717806 systemd-networkd[1429]: calif3b53135673: Gained carrier Oct 13 00:13:07.725964 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.528 [INFO][4372] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0 calico-apiserver-7c44845d57- calico-apiserver 242f3567-93e6-42a0-8b82-79c71138aef5 816 0 2025-10-13 00:12:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c44845d57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c44845d57-4zq68 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif3b53135673 [] [] }} ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-4zq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--4zq68-" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.528 [INFO][4372] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-4zq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.566 [INFO][4402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" HandleID="k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Workload="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.566 [INFO][4402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" HandleID="k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Workload="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c44845d57-4zq68", "timestamp":"2025-10-13 00:13:07.566395846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.566 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.602 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.602 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.675 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.681 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.686 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.688 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.692 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.692 [INFO][4402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.694 [INFO][4402] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8 Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.699 [INFO][4402] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.706 [INFO][4402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.706 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" host="localhost" Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.706 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:13:07.732304 containerd[1512]: 2025-10-13 00:13:07.706 [INFO][4402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" HandleID="k8s-pod-network.476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Workload="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" Oct 13 00:13:07.733050 containerd[1512]: 2025-10-13 00:13:07.710 [INFO][4372] cni-plugin/k8s.go 418: Populated endpoint ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-4zq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0", GenerateName:"calico-apiserver-7c44845d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"242f3567-93e6-42a0-8b82-79c71138aef5", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44845d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c44845d57-4zq68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3b53135673", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:07.733050 containerd[1512]: 2025-10-13 00:13:07.711 [INFO][4372] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-4zq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" Oct 13 00:13:07.733050 containerd[1512]: 2025-10-13 00:13:07.711 [INFO][4372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3b53135673 ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-4zq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" Oct 13 00:13:07.733050 containerd[1512]: 2025-10-13 00:13:07.717 [INFO][4372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-4zq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" Oct 13 00:13:07.733050 containerd[1512]: 2025-10-13 00:13:07.718 [INFO][4372] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-4zq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0", GenerateName:"calico-apiserver-7c44845d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"242f3567-93e6-42a0-8b82-79c71138aef5", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44845d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8", Pod:"calico-apiserver-7c44845d57-4zq68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3b53135673", MAC:"ea:66:f1:60:cf:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:07.733050 containerd[1512]: 2025-10-13 00:13:07.726 [INFO][4372] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-4zq68" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--4zq68-eth0" Oct 13 00:13:07.756703 containerd[1512]: time="2025-10-13T00:13:07.756650847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trtw6,Uid:d66a808f-8235-4b75-8811-149375c28468,Namespace:kube-system,Attempt:0,} returns sandbox id \"344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae\"" Oct 13 00:13:07.758229 containerd[1512]: time="2025-10-13T00:13:07.758191487Z" level=info msg="connecting to shim 476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8" address="unix:///run/containerd/s/7d150bf1fa5f092af72d2b94c0e7a66863e1aae0fc925d0d345504161cd7b04c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:13:07.760287 containerd[1512]: time="2025-10-13T00:13:07.760242447Z" level=info msg="CreateContainer within sandbox \"344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:13:07.771478 containerd[1512]: time="2025-10-13T00:13:07.771427730Z" level=info msg="Container 0c276a567e4dc9e959306308b634bbe704c0e5e6ab227eea5cb59dc409af3525: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:07.779793 containerd[1512]: time="2025-10-13T00:13:07.779735771Z" level=info msg="CreateContainer within sandbox \"344ae2a8541df04561508a286a4ec6acd5a82c3a66dba808f119c481d01898ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"0c276a567e4dc9e959306308b634bbe704c0e5e6ab227eea5cb59dc409af3525\"" Oct 13 00:13:07.781211 containerd[1512]: time="2025-10-13T00:13:07.781188332Z" level=info msg="StartContainer for \"0c276a567e4dc9e959306308b634bbe704c0e5e6ab227eea5cb59dc409af3525\"" Oct 13 00:13:07.782753 containerd[1512]: time="2025-10-13T00:13:07.782699332Z" level=info msg="connecting to shim 0c276a567e4dc9e959306308b634bbe704c0e5e6ab227eea5cb59dc409af3525" address="unix:///run/containerd/s/6a4c69664e2ae72851dc09e0d87af5b78acd948a1f3356dc45a8ef94d37995f9" protocol=ttrpc version=3 Oct 13 00:13:07.788121 systemd[1]: Started cri-containerd-476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8.scope - libcontainer container 476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8. Oct 13 00:13:07.801896 systemd[1]: Started cri-containerd-0c276a567e4dc9e959306308b634bbe704c0e5e6ab227eea5cb59dc409af3525.scope - libcontainer container 0c276a567e4dc9e959306308b634bbe704c0e5e6ab227eea5cb59dc409af3525. Oct 13 00:13:07.806019 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:13:07.835708 containerd[1512]: time="2025-10-13T00:13:07.835656503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44845d57-4zq68,Uid:242f3567-93e6-42a0-8b82-79c71138aef5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8\"" Oct 13 00:13:07.837832 containerd[1512]: time="2025-10-13T00:13:07.837704704Z" level=info msg="StartContainer for \"0c276a567e4dc9e959306308b634bbe704c0e5e6ab227eea5cb59dc409af3525\" returns successfully" Oct 13 00:13:07.837832 containerd[1512]: time="2025-10-13T00:13:07.837738304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:13:08.479384 containerd[1512]: time="2025-10-13T00:13:08.479321433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56bc948cb8-hp6bt,Uid:9aa87cc4-2dfb-4665-a9b1-839086b3077d,Namespace:calico-system,Attempt:0,}" Oct 13 00:13:08.479591 containerd[1512]: time="2025-10-13T00:13:08.479343513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8qms,Uid:6e709049-f7ad-4bd4-8ec0-f8ac241876b5,Namespace:kube-system,Attempt:0,}" Oct 13 00:13:08.479591 containerd[1512]: time="2025-10-13T00:13:08.479346793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44845d57-qzw75,Uid:24b47750-fdac-404a-9fc7-ef10b2e0d4a9,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:13:08.655288 systemd-networkd[1429]: caliaf712aa9213: Link UP Oct 13 00:13:08.655502 systemd-networkd[1429]: caliaf712aa9213: Gained carrier Oct 13 00:13:08.669506 kubelet[2635]: I1013 00:13:08.668502 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-b2b62" podStartSLOduration=20.774170967 podStartE2EDuration="22.668485191s" podCreationTimestamp="2025-10-13 00:12:46 +0000 UTC" firstStartedPulling="2025-10-13 00:13:05.71103455 +0000 UTC m=+40.327013886" lastFinishedPulling="2025-10-13 00:13:07.605348774 +0000 UTC m=+42.221328110" observedRunningTime="2025-10-13 00:13:08.657486149 +0000 UTC m=+43.273465485" watchObservedRunningTime="2025-10-13 00:13:08.668485191 +0000 UTC m=+43.284464527" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.544 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0 calico-kube-controllers-56bc948cb8- calico-system 9aa87cc4-2dfb-4665-a9b1-839086b3077d 819 0 2025-10-13 00:12:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56bc948cb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-56bc948cb8-hp6bt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaf712aa9213 [] [] }} ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Namespace="calico-system" Pod="calico-kube-controllers-56bc948cb8-hp6bt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.544 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Namespace="calico-system" Pod="calico-kube-controllers-56bc948cb8-hp6bt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.591 [INFO][4652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" HandleID="k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Workload="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.591 [INFO][4652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" HandleID="k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Workload="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033d490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-56bc948cb8-hp6bt", "timestamp":"2025-10-13 00:13:08.591373616 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.591 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.591 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.591 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.609 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.615 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.619 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.621 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.624 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.624 [INFO][4652] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.626 [INFO][4652] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736 Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.631 [INFO][4652] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.643 [INFO][4652] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.644 [INFO][4652] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" host="localhost" Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.644 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:13:08.686146 containerd[1512]: 2025-10-13 00:13:08.644 [INFO][4652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" HandleID="k8s-pod-network.0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Workload="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" Oct 13 00:13:08.687675 containerd[1512]: 2025-10-13 00:13:08.650 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Namespace="calico-system" Pod="calico-kube-controllers-56bc948cb8-hp6bt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0", GenerateName:"calico-kube-controllers-56bc948cb8-", Namespace:"calico-system", SelfLink:"", UID:"9aa87cc4-2dfb-4665-a9b1-839086b3077d", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56bc948cb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-56bc948cb8-hp6bt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaf712aa9213", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:08.687675 containerd[1512]: 2025-10-13 00:13:08.650 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Namespace="calico-system" Pod="calico-kube-controllers-56bc948cb8-hp6bt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" Oct 13 00:13:08.687675 containerd[1512]: 2025-10-13 00:13:08.651 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf712aa9213 ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Namespace="calico-system" Pod="calico-kube-controllers-56bc948cb8-hp6bt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" Oct 13 00:13:08.687675 containerd[1512]: 2025-10-13 00:13:08.655 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Namespace="calico-system" Pod="calico-kube-controllers-56bc948cb8-hp6bt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" Oct 13 00:13:08.687675 containerd[1512]: 2025-10-13 00:13:08.659 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Namespace="calico-system" Pod="calico-kube-controllers-56bc948cb8-hp6bt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0", GenerateName:"calico-kube-controllers-56bc948cb8-", Namespace:"calico-system", SelfLink:"", UID:"9aa87cc4-2dfb-4665-a9b1-839086b3077d", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56bc948cb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736", Pod:"calico-kube-controllers-56bc948cb8-hp6bt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaf712aa9213", MAC:"46:2c:85:66:37:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:08.687675 containerd[1512]: 2025-10-13 00:13:08.674 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" Namespace="calico-system" Pod="calico-kube-controllers-56bc948cb8-hp6bt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bc948cb8--hp6bt-eth0" Oct 13 00:13:08.717386 containerd[1512]: time="2025-10-13T00:13:08.717341481Z" level=info msg="connecting to shim 0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736" address="unix:///run/containerd/s/8b019400991b2becf996ce67b69847e377b1495295332d661a1d9f44cf8ad86f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:13:08.747833 systemd[1]: Started cri-containerd-0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736.scope - libcontainer container 0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736. 
Oct 13 00:13:08.760423 systemd-networkd[1429]: cali88409b4c697: Link UP Oct 13 00:13:08.760878 systemd-networkd[1429]: cali88409b4c697: Gained carrier Oct 13 00:13:08.779911 kubelet[2635]: I1013 00:13:08.779617 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-trtw6" podStartSLOduration=36.779585333 podStartE2EDuration="36.779585333s" podCreationTimestamp="2025-10-13 00:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:13:08.679028393 +0000 UTC m=+43.295007769" watchObservedRunningTime="2025-10-13 00:13:08.779585333 +0000 UTC m=+43.395564669" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.537 [INFO][4625] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0 calico-apiserver-7c44845d57- calico-apiserver 24b47750-fdac-404a-9fc7-ef10b2e0d4a9 823 0 2025-10-13 00:12:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c44845d57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c44845d57-qzw75 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali88409b4c697 [] [] }} ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-qzw75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--qzw75-" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.537 [INFO][4625] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-qzw75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.593 [INFO][4645] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" HandleID="k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Workload="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.594 [INFO][4645] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" HandleID="k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Workload="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000118410), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c44845d57-qzw75", "timestamp":"2025-10-13 00:13:08.591548056 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.596 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.645 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.645 [INFO][4645] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.711 [INFO][4645] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.717 [INFO][4645] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.726 [INFO][4645] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.728 [INFO][4645] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.732 [INFO][4645] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.732 [INFO][4645] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.734 [INFO][4645] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4 Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.738 [INFO][4645] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.749 [INFO][4645] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.749 [INFO][4645] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" host="localhost" Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.750 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:13:08.784397 containerd[1512]: 2025-10-13 00:13:08.750 [INFO][4645] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" HandleID="k8s-pod-network.80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Workload="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" Oct 13 00:13:08.785507 containerd[1512]: 2025-10-13 00:13:08.758 [INFO][4625] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-qzw75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0", GenerateName:"calico-apiserver-7c44845d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"24b47750-fdac-404a-9fc7-ef10b2e0d4a9", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44845d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c44845d57-qzw75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88409b4c697", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:08.785507 containerd[1512]: 2025-10-13 00:13:08.758 [INFO][4625] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-qzw75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" Oct 13 00:13:08.785507 containerd[1512]: 2025-10-13 00:13:08.758 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88409b4c697 ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-qzw75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" Oct 13 00:13:08.785507 containerd[1512]: 2025-10-13 00:13:08.762 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-qzw75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" Oct 13 00:13:08.785507 containerd[1512]: 2025-10-13 00:13:08.764 [INFO][4625] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-qzw75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0", GenerateName:"calico-apiserver-7c44845d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"24b47750-fdac-404a-9fc7-ef10b2e0d4a9", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44845d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4", Pod:"calico-apiserver-7c44845d57-qzw75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88409b4c697", MAC:"86:82:33:b7:9e:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:08.785507 containerd[1512]: 2025-10-13 00:13:08.778 [INFO][4625] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" Namespace="calico-apiserver" Pod="calico-apiserver-7c44845d57-qzw75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c44845d57--qzw75-eth0" Oct 13 00:13:08.787405 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:13:08.810887 containerd[1512]: time="2025-10-13T00:13:08.810659899Z" level=info msg="connecting to shim 80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4" address="unix:///run/containerd/s/aaf99f950c8b29d88a26f3310e4b5fc58b6be2931c60723f229237bdf057dfe2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:13:08.834476 containerd[1512]: time="2025-10-13T00:13:08.834366264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56bc948cb8-hp6bt,Uid:9aa87cc4-2dfb-4665-a9b1-839086b3077d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736\"" Oct 13 00:13:08.854611 systemd[1]: Started cri-containerd-80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4.scope - libcontainer container 80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4. 
Oct 13 00:13:08.873327 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:13:08.874634 systemd-networkd[1429]: calib1780dbab75: Link UP Oct 13 00:13:08.877741 systemd-networkd[1429]: calib1780dbab75: Gained carrier Oct 13 00:13:08.877952 containerd[1512]: time="2025-10-13T00:13:08.877724073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62\" id:\"959d8cda0e6f6a8506268b7f67f18a075093d237189fdbd7502aa9459fa686e6\" pid:4721 exit_status:1 exited_at:{seconds:1760314388 nanos:875569152}" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.566 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--k8qms-eth0 coredns-668d6bf9bc- kube-system 6e709049-f7ad-4bd4-8ec0-f8ac241876b5 825 0 2025-10-13 00:12:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-k8qms eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1780dbab75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8qms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k8qms-" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.567 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8qms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.614 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" HandleID="k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Workload="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.614 [INFO][4660] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" HandleID="k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Workload="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000513140), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-k8qms", "timestamp":"2025-10-13 00:13:08.61421518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.614 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.750 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.750 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.813 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.825 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.835 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.840 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.848 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.848 [INFO][4660] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.850 [INFO][4660] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217 Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.856 [INFO][4660] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.864 [INFO][4660] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.864 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" host="localhost" Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.864 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:13:08.899871 containerd[1512]: 2025-10-13 00:13:08.864 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" HandleID="k8s-pod-network.43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Workload="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" Oct 13 00:13:08.901622 containerd[1512]: 2025-10-13 00:13:08.868 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8qms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--k8qms-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6e709049-f7ad-4bd4-8ec0-f8ac241876b5", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-k8qms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1780dbab75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:08.901622 containerd[1512]: 2025-10-13 00:13:08.869 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8qms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" Oct 13 00:13:08.901622 containerd[1512]: 2025-10-13 00:13:08.869 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1780dbab75 ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8qms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" Oct 13 00:13:08.901622 containerd[1512]: 2025-10-13 00:13:08.878 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8qms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" Oct 13 00:13:08.901622 
containerd[1512]: 2025-10-13 00:13:08.878 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8qms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--k8qms-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6e709049-f7ad-4bd4-8ec0-f8ac241876b5", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217", Pod:"coredns-668d6bf9bc-k8qms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1780dbab75", MAC:"fa:55:dd:0e:5c:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:08.901622 containerd[1512]: 2025-10-13 00:13:08.894 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8qms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k8qms-eth0" Oct 13 00:13:08.927104 containerd[1512]: time="2025-10-13T00:13:08.926884642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44845d57-qzw75,Uid:24b47750-fdac-404a-9fc7-ef10b2e0d4a9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4\"" Oct 13 00:13:08.943426 containerd[1512]: time="2025-10-13T00:13:08.943368286Z" level=info msg="connecting to shim 43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217" address="unix:///run/containerd/s/14aa1103af3322c3b4c1ffc5951a05f58258f33d31d116a65f27edcdfa900344" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:13:08.971651 systemd[1]: Started cri-containerd-43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217.scope - libcontainer container 43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217. 
Oct 13 00:13:08.982893 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:13:09.057480 containerd[1512]: time="2025-10-13T00:13:09.057206348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8qms,Uid:6e709049-f7ad-4bd4-8ec0-f8ac241876b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217\"" Oct 13 00:13:09.060990 containerd[1512]: time="2025-10-13T00:13:09.060960428Z" level=info msg="CreateContainer within sandbox \"43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:13:09.069877 containerd[1512]: time="2025-10-13T00:13:09.069844550Z" level=info msg="Container 65461096fd97997c14bbc6985d8e8b4fb3458c2e48ab9004e828b62cbd583a59: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:09.075419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361661747.mount: Deactivated successfully. Oct 13 00:13:09.081187 containerd[1512]: time="2025-10-13T00:13:09.081135592Z" level=info msg="CreateContainer within sandbox \"43aa6b72eefc994d9fd29117e13b194434b74c31bcd9330fa73ea95fe83b0217\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65461096fd97997c14bbc6985d8e8b4fb3458c2e48ab9004e828b62cbd583a59\"" Oct 13 00:13:09.081927 containerd[1512]: time="2025-10-13T00:13:09.081865592Z" level=info msg="StartContainer for \"65461096fd97997c14bbc6985d8e8b4fb3458c2e48ab9004e828b62cbd583a59\"" Oct 13 00:13:09.082945 containerd[1512]: time="2025-10-13T00:13:09.082911672Z" level=info msg="connecting to shim 65461096fd97997c14bbc6985d8e8b4fb3458c2e48ab9004e828b62cbd583a59" address="unix:///run/containerd/s/14aa1103af3322c3b4c1ffc5951a05f58258f33d31d116a65f27edcdfa900344" protocol=ttrpc version=3 Oct 13 00:13:09.111842 systemd[1]: Started cri-containerd-65461096fd97997c14bbc6985d8e8b4fb3458c2e48ab9004e828b62cbd583a59.scope - libcontainer container 65461096fd97997c14bbc6985d8e8b4fb3458c2e48ab9004e828b62cbd583a59. 
Oct 13 00:13:09.157733 systemd-networkd[1429]: calib6555a2fd3f: Gained IPv6LL Oct 13 00:13:09.171862 containerd[1512]: time="2025-10-13T00:13:09.171742969Z" level=info msg="StartContainer for \"65461096fd97997c14bbc6985d8e8b4fb3458c2e48ab9004e828b62cbd583a59\" returns successfully" Oct 13 00:13:09.349586 systemd-networkd[1429]: calif3b53135673: Gained IPv6LL Oct 13 00:13:09.633210 containerd[1512]: time="2025-10-13T00:13:09.632643095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:09.633344 containerd[1512]: time="2025-10-13T00:13:09.633201655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Oct 13 00:13:09.634361 containerd[1512]: time="2025-10-13T00:13:09.634320455Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:09.638504 containerd[1512]: time="2025-10-13T00:13:09.638303696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:09.639345 containerd[1512]: time="2025-10-13T00:13:09.639002936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.801234752s" Oct 13 00:13:09.639345 containerd[1512]: time="2025-10-13T00:13:09.639040296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:13:09.640083 containerd[1512]: time="2025-10-13T00:13:09.640037776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 00:13:09.641239 containerd[1512]: time="2025-10-13T00:13:09.641201816Z" level=info msg="CreateContainer within sandbox \"476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:13:09.648427 containerd[1512]: time="2025-10-13T00:13:09.648371498Z" level=info msg="Container 9981bf7d3560450899f642b2d34f7031585735f780f22d4f54ee595acb62f067: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:09.658068 containerd[1512]: time="2025-10-13T00:13:09.657999100Z" level=info msg="CreateContainer within sandbox \"476c928324bc63d0771ae621e3ecae986e95591908f5b3f5e9d53ab676f78df8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9981bf7d3560450899f642b2d34f7031585735f780f22d4f54ee595acb62f067\"" Oct 13 00:13:09.658626 containerd[1512]: time="2025-10-13T00:13:09.658522820Z" level=info msg="StartContainer for \"9981bf7d3560450899f642b2d34f7031585735f780f22d4f54ee595acb62f067\"" Oct 13 00:13:09.662381 containerd[1512]: time="2025-10-13T00:13:09.662315260Z" level=info msg="connecting to shim 9981bf7d3560450899f642b2d34f7031585735f780f22d4f54ee595acb62f067" address="unix:///run/containerd/s/7d150bf1fa5f092af72d2b94c0e7a66863e1aae0fc925d0d345504161cd7b04c" protocol=ttrpc version=3 Oct 13 00:13:09.670485 kubelet[2635]: I1013 
00:13:09.670054 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k8qms" podStartSLOduration=37.670034502 podStartE2EDuration="37.670034502s" podCreationTimestamp="2025-10-13 00:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:13:09.670030262 +0000 UTC m=+44.286009598" watchObservedRunningTime="2025-10-13 00:13:09.670034502 +0000 UTC m=+44.286013918" Oct 13 00:13:09.694696 systemd[1]: Started cri-containerd-9981bf7d3560450899f642b2d34f7031585735f780f22d4f54ee595acb62f067.scope - libcontainer container 9981bf7d3560450899f642b2d34f7031585735f780f22d4f54ee595acb62f067. Oct 13 00:13:09.746776 containerd[1512]: time="2025-10-13T00:13:09.746714716Z" level=info msg="StartContainer for \"9981bf7d3560450899f642b2d34f7031585735f780f22d4f54ee595acb62f067\" returns successfully" Oct 13 00:13:09.756867 containerd[1512]: time="2025-10-13T00:13:09.756829518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62\" id:\"725adc9b6ece1139d7a5c7b6bb6ba3c43854cf183540028e8b3a69893ec20b77\" pid:4930 exit_status:1 exited_at:{seconds:1760314389 nanos:756493278}" Oct 13 00:13:10.053654 systemd-networkd[1429]: caliaf712aa9213: Gained IPv6LL Oct 13 00:13:10.373611 systemd-networkd[1429]: cali88409b4c697: Gained IPv6LL Oct 13 00:13:10.479283 containerd[1512]: time="2025-10-13T00:13:10.479185967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxnvf,Uid:ca6273ce-aeab-40b7-bf01-7ab04b5d2d19,Namespace:calico-system,Attempt:0,}" Oct 13 00:13:10.619134 systemd-networkd[1429]: calif4d2db2226c: Link UP Oct 13 00:13:10.620634 systemd-networkd[1429]: calif4d2db2226c: Gained carrier Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.520 [INFO][4976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fxnvf-eth0 csi-node-driver- calico-system ca6273ce-aeab-40b7-bf01-7ab04b5d2d19 697 0 2025-10-13 00:12:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fxnvf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif4d2db2226c [] [] }} ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Namespace="calico-system" Pod="csi-node-driver-fxnvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxnvf-" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.520 [INFO][4976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Namespace="calico-system" Pod="csi-node-driver-fxnvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxnvf-eth0" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.552 [INFO][4990] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" HandleID="k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Workload="localhost-k8s-csi--node--driver--fxnvf-eth0" Oct 13 00:13:10.638358 containerd[1512]: 
2025-10-13 00:13:10.552 [INFO][4990] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" HandleID="k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Workload="localhost-k8s-csi--node--driver--fxnvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fxnvf", "timestamp":"2025-10-13 00:13:10.55228478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.552 [INFO][4990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.552 [INFO][4990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.552 [INFO][4990] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.562 [INFO][4990] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.571 [INFO][4990] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.583 [INFO][4990] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.587 [INFO][4990] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.592 [INFO][4990] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.592 [INFO][4990] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.594 [INFO][4990] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8 Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.598 [INFO][4990] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.609 [INFO][4990] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.609 [INFO][4990] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" host="localhost" Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.609 [INFO][4990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:13:10.638358 containerd[1512]: 2025-10-13 00:13:10.609 [INFO][4990] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" HandleID="k8s-pod-network.2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Workload="localhost-k8s-csi--node--driver--fxnvf-eth0" Oct 13 00:13:10.638959 containerd[1512]: 2025-10-13 00:13:10.615 [INFO][4976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Namespace="calico-system" Pod="csi-node-driver-fxnvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxnvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fxnvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca6273ce-aeab-40b7-bf01-7ab04b5d2d19", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fxnvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif4d2db2226c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:10.638959 containerd[1512]: 2025-10-13 00:13:10.616 [INFO][4976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Namespace="calico-system" Pod="csi-node-driver-fxnvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxnvf-eth0" Oct 13 00:13:10.638959 containerd[1512]: 2025-10-13 00:13:10.616 [INFO][4976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4d2db2226c ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Namespace="calico-system" Pod="csi-node-driver-fxnvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxnvf-eth0" Oct 13 00:13:10.638959 containerd[1512]: 2025-10-13 00:13:10.620 [INFO][4976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Namespace="calico-system" Pod="csi-node-driver-fxnvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxnvf-eth0" Oct 13 00:13:10.638959 containerd[1512]: 2025-10-13 00:13:10.621 [INFO][4976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Namespace="calico-system" Pod="csi-node-driver-fxnvf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fxnvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fxnvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca6273ce-aeab-40b7-bf01-7ab04b5d2d19", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8", Pod:"csi-node-driver-fxnvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif4d2db2226c", MAC:"56:dc:a9:25:14:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:13:10.638959 containerd[1512]: 2025-10-13 00:13:10.634 [INFO][4976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" Namespace="calico-system" Pod="csi-node-driver-fxnvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxnvf-eth0" Oct 13 00:13:10.678020 kubelet[2635]: I1013 00:13:10.677916 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c44845d57-4zq68" podStartSLOduration=27.87528445 podStartE2EDuration="29.677897842s" podCreationTimestamp="2025-10-13 00:12:41 +0000 UTC" firstStartedPulling="2025-10-13 00:13:07.837258624 +0000 UTC m=+42.453237960" lastFinishedPulling="2025-10-13 00:13:09.639872016 +0000 UTC m=+44.255851352" observedRunningTime="2025-10-13 00:13:10.677824482 +0000 UTC m=+45.293803778" watchObservedRunningTime="2025-10-13 00:13:10.677897842 +0000 UTC m=+45.293877138" Oct 13 00:13:10.716829 containerd[1512]: time="2025-10-13T00:13:10.716650249Z" level=info msg="connecting to shim 2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8" address="unix:///run/containerd/s/a16827948e00add145b4a167b8062dc868c7fa7e598660170224d71b42e4d901" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:13:10.751123 systemd[1]: Started cri-containerd-2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8.scope - libcontainer container 2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8. 
Oct 13 00:13:10.769144 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:13:10.787570 containerd[1512]: time="2025-10-13T00:13:10.787532981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxnvf,Uid:ca6273ce-aeab-40b7-bf01-7ab04b5d2d19,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8\"" Oct 13 00:13:10.821646 systemd-networkd[1429]: calib1780dbab75: Gained IPv6LL Oct 13 00:13:11.655868 systemd[1]: Started sshd@8-10.0.0.101:22-10.0.0.1:54700.service - OpenSSH per-connection server daemon (10.0.0.1:54700). Oct 13 00:13:11.665378 kubelet[2635]: I1013 00:13:11.665327 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:13:11.718766 systemd-networkd[1429]: calif4d2db2226c: Gained IPv6LL Oct 13 00:13:11.741476 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 54700 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:11.743508 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:11.748503 systemd-logind[1484]: New session 9 of user core. Oct 13 00:13:11.755776 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 00:13:11.818636 containerd[1512]: time="2025-10-13T00:13:11.818588072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:11.819284 containerd[1512]: time="2025-10-13T00:13:11.819257792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Oct 13 00:13:11.820542 containerd[1512]: time="2025-10-13T00:13:11.820240512Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:11.823138 containerd[1512]: time="2025-10-13T00:13:11.822990153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:11.835393 containerd[1512]: time="2025-10-13T00:13:11.835355355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.195167779s" Oct 13 00:13:11.835393 containerd[1512]: time="2025-10-13T00:13:11.835393595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Oct 13 00:13:11.837519 containerd[1512]: time="2025-10-13T00:13:11.837029515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:13:11.848602 containerd[1512]: time="2025-10-13T00:13:11.848557717Z" level=info msg="CreateContainer within sandbox \"0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 00:13:11.860159 containerd[1512]: time="2025-10-13T00:13:11.859550999Z" 
level=info msg="Container 7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:11.866664 containerd[1512]: time="2025-10-13T00:13:11.866630280Z" level=info msg="CreateContainer within sandbox \"0f8c80aa0166d70ec14ce6001d7fad85c3bc2fe8f063425791299944ae31a736\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba\"" Oct 13 00:13:11.867183 containerd[1512]: time="2025-10-13T00:13:11.867163920Z" level=info msg="StartContainer for \"7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba\"" Oct 13 00:13:11.868339 containerd[1512]: time="2025-10-13T00:13:11.868314640Z" level=info msg="connecting to shim 7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba" address="unix:///run/containerd/s/8b019400991b2becf996ce67b69847e377b1495295332d661a1d9f44cf8ad86f" protocol=ttrpc version=3 Oct 13 00:13:11.893831 systemd[1]: Started cri-containerd-7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba.scope - libcontainer container 7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba. Oct 13 00:13:11.969778 containerd[1512]: time="2025-10-13T00:13:11.969663737Z" level=info msg="StartContainer for \"7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba\" returns successfully" Oct 13 00:13:12.023650 kubelet[2635]: I1013 00:13:12.023202 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:13:12.037695 sshd[5065]: Connection closed by 10.0.0.1 port 54700 Oct 13 00:13:12.038335 sshd-session[5061]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:12.045899 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:54700.service: Deactivated successfully. Oct 13 00:13:12.046291 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit. Oct 13 00:13:12.048989 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 00:13:12.052975 systemd-logind[1484]: Removed session 9. 
Oct 13 00:13:12.093948 containerd[1512]: time="2025-10-13T00:13:12.093902396Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:12.095029 containerd[1512]: time="2025-10-13T00:13:12.094689316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 00:13:12.099518 containerd[1512]: time="2025-10-13T00:13:12.099432877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 262.369802ms" Oct 13 00:13:12.099518 containerd[1512]: time="2025-10-13T00:13:12.099484917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:13:12.110466 containerd[1512]: time="2025-10-13T00:13:12.110400959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 00:13:12.112624 containerd[1512]: time="2025-10-13T00:13:12.112594079Z" level=info msg="CreateContainer within sandbox \"80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:13:12.133462 containerd[1512]: time="2025-10-13T00:13:12.133289802Z" level=info msg="Container dc2751840a076dcbae94b6d42666298849518761674e1b259f3259001a3fe250: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:12.139691 containerd[1512]: time="2025-10-13T00:13:12.139647243Z" level=info msg="CreateContainer within sandbox \"80c4a07a8463f7520508ab4fe7399a5f4161e30a981a8bc1e3b75cdfcc5d10b4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc2751840a076dcbae94b6d42666298849518761674e1b259f3259001a3fe250\"" Oct 13 00:13:12.140141 containerd[1512]: time="2025-10-13T00:13:12.140115123Z" level=info msg="StartContainer for \"dc2751840a076dcbae94b6d42666298849518761674e1b259f3259001a3fe250\"" Oct 13 00:13:12.143621 containerd[1512]: time="2025-10-13T00:13:12.143590564Z" level=info msg="connecting to shim dc2751840a076dcbae94b6d42666298849518761674e1b259f3259001a3fe250" address="unix:///run/containerd/s/aaf99f950c8b29d88a26f3310e4b5fc58b6be2931c60723f229237bdf057dfe2" protocol=ttrpc version=3 Oct 13 00:13:12.166665 systemd[1]: Started cri-containerd-dc2751840a076dcbae94b6d42666298849518761674e1b259f3259001a3fe250.scope - libcontainer container dc2751840a076dcbae94b6d42666298849518761674e1b259f3259001a3fe250. 
Oct 13 00:13:12.184112 containerd[1512]: time="2025-10-13T00:13:12.184077210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4\" id:\"58313b8c70a59e3bcca80db40fc7ef3eed7e46dbde6eb499e464b87bb288b3ca\" pid:5137 exited_at:{seconds:1760314392 nanos:183502730}" Oct 13 00:13:12.218475 containerd[1512]: time="2025-10-13T00:13:12.218212535Z" level=info msg="StartContainer for \"dc2751840a076dcbae94b6d42666298849518761674e1b259f3259001a3fe250\" returns successfully" Oct 13 00:13:12.282017 containerd[1512]: time="2025-10-13T00:13:12.281888305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4\" id:\"646f5713b0ba1274d3680214d2f5b7d6715ff4df6e5f5e30760d049e1069ad6a\" pid:5185 exited_at:{seconds:1760314392 nanos:281589545}" Oct 13 00:13:12.685896 kubelet[2635]: I1013 00:13:12.685829 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c44845d57-qzw75" podStartSLOduration=28.509288132000002 podStartE2EDuration="31.685814447s" podCreationTimestamp="2025-10-13 00:12:41 +0000 UTC" firstStartedPulling="2025-10-13 00:13:08.933661484 +0000 UTC m=+43.549640820" lastFinishedPulling="2025-10-13 00:13:12.110187839 +0000 UTC m=+46.726167135" observedRunningTime="2025-10-13 00:13:12.679812446 +0000 UTC m=+47.295791782" watchObservedRunningTime="2025-10-13 00:13:12.685814447 +0000 UTC m=+47.301793783" Oct 13 00:13:12.694209 kubelet[2635]: I1013 00:13:12.694134 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56bc948cb8-hp6bt" podStartSLOduration=23.696011279 podStartE2EDuration="26.694093969s" podCreationTimestamp="2025-10-13 00:12:46 +0000 UTC" firstStartedPulling="2025-10-13 00:13:08.838767745 +0000 UTC m=+43.454747081" lastFinishedPulling="2025-10-13 00:13:11.836850435 +0000 UTC m=+46.452829771" observedRunningTime="2025-10-13 00:13:12.693237088 +0000 UTC m=+47.309216424" watchObservedRunningTime="2025-10-13 00:13:12.694093969 +0000 UTC m=+47.310073345" Oct 13 00:13:13.180692 containerd[1512]: time="2025-10-13T00:13:13.180629482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:13.181492 containerd[1512]: time="2025-10-13T00:13:13.181463042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Oct 13 00:13:13.183018 containerd[1512]: time="2025-10-13T00:13:13.182970762Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:13.185293 containerd[1512]: time="2025-10-13T00:13:13.185258922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:13.186499 containerd[1512]: time="2025-10-13T00:13:13.186467042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.076016203s" Oct 13 
00:13:13.186499 containerd[1512]: time="2025-10-13T00:13:13.186499402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Oct 13 00:13:13.191346 containerd[1512]: time="2025-10-13T00:13:13.191317243Z" level=info msg="CreateContainer within sandbox \"2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 00:13:13.203655 containerd[1512]: time="2025-10-13T00:13:13.203610805Z" level=info msg="Container cd12c1443c8f546f265f4abe52a02ab1bdc1a1c501dff5faf6e7d58cda4c86d0: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:13.224466 containerd[1512]: time="2025-10-13T00:13:13.224328208Z" level=info msg="CreateContainer within sandbox \"2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cd12c1443c8f546f265f4abe52a02ab1bdc1a1c501dff5faf6e7d58cda4c86d0\"" Oct 13 00:13:13.225041 containerd[1512]: time="2025-10-13T00:13:13.225014728Z" level=info msg="StartContainer for \"cd12c1443c8f546f265f4abe52a02ab1bdc1a1c501dff5faf6e7d58cda4c86d0\"" Oct 13 00:13:13.228405 containerd[1512]: time="2025-10-13T00:13:13.228010568Z" level=info msg="connecting to shim cd12c1443c8f546f265f4abe52a02ab1bdc1a1c501dff5faf6e7d58cda4c86d0" address="unix:///run/containerd/s/a16827948e00add145b4a167b8062dc868c7fa7e598660170224d71b42e4d901" protocol=ttrpc version=3 Oct 13 00:13:13.255622 systemd[1]: Started cri-containerd-cd12c1443c8f546f265f4abe52a02ab1bdc1a1c501dff5faf6e7d58cda4c86d0.scope - libcontainer container cd12c1443c8f546f265f4abe52a02ab1bdc1a1c501dff5faf6e7d58cda4c86d0. Oct 13 00:13:13.298693 containerd[1512]: time="2025-10-13T00:13:13.298644739Z" level=info msg="StartContainer for \"cd12c1443c8f546f265f4abe52a02ab1bdc1a1c501dff5faf6e7d58cda4c86d0\" returns successfully" Oct 13 00:13:13.299799 containerd[1512]: time="2025-10-13T00:13:13.299774019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 00:13:13.675948 kubelet[2635]: I1013 00:13:13.675914 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:13:13.707312 containerd[1512]: time="2025-10-13T00:13:13.707270037Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba\" id:\"4ee49bd932ba4186171cd2e5a1880ce19f28b7d618307bce41c9159e91b23f61\" pid:5264 exited_at:{seconds:1760314393 nanos:706829557}" Oct 13 00:13:14.493959 containerd[1512]: time="2025-10-13T00:13:14.493915746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:14.495079 containerd[1512]: time="2025-10-13T00:13:14.494875746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Oct 13 00:13:14.495929 containerd[1512]: time="2025-10-13T00:13:14.495892346Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:14.498053 containerd[1512]: time="2025-10-13T00:13:14.498020867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:13:14.499064 containerd[1512]: time="2025-10-13T00:13:14.499034867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.199228728s" Oct 13 00:13:14.499136 containerd[1512]: time="2025-10-13T00:13:14.499067747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Oct 13 00:13:14.501134 containerd[1512]: time="2025-10-13T00:13:14.501028627Z" level=info msg="CreateContainer within sandbox \"2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 00:13:14.507611 containerd[1512]: time="2025-10-13T00:13:14.507575828Z" level=info msg="Container d5da2627d8fdd859fe80bbffd03f3b42dc2ec231a1caf7042eba2276c5a14e2a: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:13:14.516720 containerd[1512]: time="2025-10-13T00:13:14.516687829Z" level=info msg="CreateContainer within sandbox \"2d0226dec8656f5942c8b6b4fbf71f53ef7c9c448bebc9a65ab2f34f71da8fd8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d5da2627d8fdd859fe80bbffd03f3b42dc2ec231a1caf7042eba2276c5a14e2a\"" Oct 13 00:13:14.517477 containerd[1512]: time="2025-10-13T00:13:14.517263709Z" level=info msg="StartContainer for \"d5da2627d8fdd859fe80bbffd03f3b42dc2ec231a1caf7042eba2276c5a14e2a\"" Oct 13 00:13:14.518661 containerd[1512]: time="2025-10-13T00:13:14.518631470Z" level=info msg="connecting to shim d5da2627d8fdd859fe80bbffd03f3b42dc2ec231a1caf7042eba2276c5a14e2a" address="unix:///run/containerd/s/a16827948e00add145b4a167b8062dc868c7fa7e598660170224d71b42e4d901" protocol=ttrpc version=3 Oct 13 00:13:14.537622 systemd[1]: Started cri-containerd-d5da2627d8fdd859fe80bbffd03f3b42dc2ec231a1caf7042eba2276c5a14e2a.scope - libcontainer container d5da2627d8fdd859fe80bbffd03f3b42dc2ec231a1caf7042eba2276c5a14e2a. 
Oct 13 00:13:14.570144 containerd[1512]: time="2025-10-13T00:13:14.570108996Z" level=info msg="StartContainer for \"d5da2627d8fdd859fe80bbffd03f3b42dc2ec231a1caf7042eba2276c5a14e2a\" returns successfully" Oct 13 00:13:14.691068 kubelet[2635]: I1013 00:13:14.691014 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fxnvf" podStartSLOduration=24.980534847 podStartE2EDuration="28.690996973s" podCreationTimestamp="2025-10-13 00:12:46 +0000 UTC" firstStartedPulling="2025-10-13 00:13:10.789233661 +0000 UTC m=+45.405212997" lastFinishedPulling="2025-10-13 00:13:14.499695827 +0000 UTC m=+49.115675123" observedRunningTime="2025-10-13 00:13:14.690324853 +0000 UTC m=+49.306304189" watchObservedRunningTime="2025-10-13 00:13:14.690996973 +0000 UTC m=+49.306976269" Oct 13 00:13:15.551204 kubelet[2635]: I1013 00:13:15.551156 2635 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 00:13:15.555464 kubelet[2635]: I1013 00:13:15.555433 2635 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 00:13:17.050678 systemd[1]: Started sshd@9-10.0.0.101:22-10.0.0.1:53724.service - OpenSSH per-connection server daemon (10.0.0.1:53724). Oct 13 00:13:17.124456 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 53724 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:17.125041 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:17.129198 systemd-logind[1484]: New session 10 of user core. Oct 13 00:13:17.134625 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 00:13:17.329507 sshd[5331]: Connection closed by 10.0.0.1 port 53724 Oct 13 00:13:17.329344 sshd-session[5328]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:17.339647 systemd[1]: sshd@9-10.0.0.101:22-10.0.0.1:53724.service: Deactivated successfully. Oct 13 00:13:17.341759 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 00:13:17.342711 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit. Oct 13 00:13:17.345280 systemd[1]: Started sshd@10-10.0.0.101:22-10.0.0.1:53738.service - OpenSSH per-connection server daemon (10.0.0.1:53738). Oct 13 00:13:17.347243 systemd-logind[1484]: Removed session 10. Oct 13 00:13:17.408444 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 53738 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:17.409652 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:17.413742 systemd-logind[1484]: New session 11 of user core. Oct 13 00:13:17.426678 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 00:13:17.593207 sshd[5349]: Connection closed by 10.0.0.1 port 53738 Oct 13 00:13:17.594599 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:17.604107 systemd[1]: sshd@10-10.0.0.101:22-10.0.0.1:53738.service: Deactivated successfully. Oct 13 00:13:17.607751 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 00:13:17.610402 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Oct 13 00:13:17.613909 systemd[1]: Started sshd@11-10.0.0.101:22-10.0.0.1:53750.service - OpenSSH per-connection server daemon (10.0.0.1:53750). 
Oct 13 00:13:17.615138 systemd-logind[1484]: Removed session 11. Oct 13 00:13:17.668797 sshd[5361]: Accepted publickey for core from 10.0.0.1 port 53750 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:17.669926 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:17.674079 systemd-logind[1484]: New session 12 of user core. Oct 13 00:13:17.688642 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 00:13:17.848826 sshd[5364]: Connection closed by 10.0.0.1 port 53750 Oct 13 00:13:17.849588 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:17.853002 systemd[1]: sshd@11-10.0.0.101:22-10.0.0.1:53750.service: Deactivated successfully. Oct 13 00:13:17.855914 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 00:13:17.856631 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Oct 13 00:13:17.857678 systemd-logind[1484]: Removed session 12. Oct 13 00:13:21.250629 containerd[1512]: time="2025-10-13T00:13:21.250592126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62\" id:\"caec1732ba24d0c2e1b9b997d532216f4814803a3f1190a4661a4e556bd2ffc7\" pid:5397 exited_at:{seconds:1760314401 nanos:250293766}" Oct 13 00:13:22.860761 systemd[1]: Started sshd@12-10.0.0.101:22-10.0.0.1:53762.service - OpenSSH per-connection server daemon (10.0.0.1:53762). Oct 13 00:13:22.921155 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 53762 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:22.922954 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:22.927494 systemd-logind[1484]: New session 13 of user core. Oct 13 00:13:22.938647 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 00:13:23.140017 sshd[5413]: Connection closed by 10.0.0.1 port 53762 Oct 13 00:13:23.140758 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:23.146571 systemd[1]: sshd@12-10.0.0.101:22-10.0.0.1:53762.service: Deactivated successfully. Oct 13 00:13:23.148941 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 00:13:23.149745 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Oct 13 00:13:23.151051 systemd-logind[1484]: Removed session 13. Oct 13 00:13:28.155963 systemd[1]: Started sshd@13-10.0.0.101:22-10.0.0.1:34068.service - OpenSSH per-connection server daemon (10.0.0.1:34068). Oct 13 00:13:28.209106 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 34068 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:28.210445 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:28.214257 systemd-logind[1484]: New session 14 of user core. Oct 13 00:13:28.226640 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 00:13:28.370922 sshd[5437]: Connection closed by 10.0.0.1 port 34068 Oct 13 00:13:28.371521 sshd-session[5434]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:28.375147 systemd[1]: sshd@13-10.0.0.101:22-10.0.0.1:34068.service: Deactivated successfully. Oct 13 00:13:28.377843 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 00:13:28.378747 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Oct 13 00:13:28.379930 systemd-logind[1484]: Removed session 14. 
Oct 13 00:13:31.436099 kubelet[2635]: I1013 00:13:31.436052 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:13:33.386996 systemd[1]: Started sshd@14-10.0.0.101:22-10.0.0.1:34082.service - OpenSSH per-connection server daemon (10.0.0.1:34082). Oct 13 00:13:33.444657 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 34082 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:33.445239 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:33.449427 systemd-logind[1484]: New session 15 of user core. Oct 13 00:13:33.459645 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 00:13:33.634967 sshd[5457]: Connection closed by 10.0.0.1 port 34082 Oct 13 00:13:33.635658 sshd-session[5454]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:33.640408 systemd[1]: sshd@14-10.0.0.101:22-10.0.0.1:34082.service: Deactivated successfully. Oct 13 00:13:33.642702 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 00:13:33.643972 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Oct 13 00:13:33.645458 systemd-logind[1484]: Removed session 15. Oct 13 00:13:34.167483 kubelet[2635]: I1013 00:13:34.167422 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:13:38.647876 systemd[1]: Started sshd@15-10.0.0.101:22-10.0.0.1:50422.service - OpenSSH per-connection server daemon (10.0.0.1:50422). Oct 13 00:13:38.716164 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 50422 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:38.717941 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:38.726309 systemd-logind[1484]: New session 16 of user core. Oct 13 00:13:38.731728 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 00:13:38.911159 sshd[5478]: Connection closed by 10.0.0.1 port 50422 Oct 13 00:13:38.911539 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:38.921978 systemd[1]: sshd@15-10.0.0.101:22-10.0.0.1:50422.service: Deactivated successfully. Oct 13 00:13:38.923728 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 00:13:38.924634 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. Oct 13 00:13:38.926996 systemd[1]: Started sshd@16-10.0.0.101:22-10.0.0.1:50434.service - OpenSSH per-connection server daemon (10.0.0.1:50434). Oct 13 00:13:38.929024 systemd-logind[1484]: Removed session 16. Oct 13 00:13:38.980039 sshd[5491]: Accepted publickey for core from 10.0.0.1 port 50434 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:38.981241 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:38.985364 systemd-logind[1484]: New session 17 of user core. Oct 13 00:13:38.993666 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 00:13:39.210334 sshd[5494]: Connection closed by 10.0.0.1 port 50434 Oct 13 00:13:39.211828 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:39.222796 systemd[1]: sshd@16-10.0.0.101:22-10.0.0.1:50434.service: Deactivated successfully. Oct 13 00:13:39.226170 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 00:13:39.227592 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. 
Oct 13 00:13:39.229660 systemd-logind[1484]: Removed session 17. Oct 13 00:13:39.234306 systemd[1]: Started sshd@17-10.0.0.101:22-10.0.0.1:50444.service - OpenSSH per-connection server daemon (10.0.0.1:50444). Oct 13 00:13:39.280437 sshd[5506]: Accepted publickey for core from 10.0.0.1 port 50444 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:39.282126 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:39.288679 systemd-logind[1484]: New session 18 of user core. Oct 13 00:13:39.292599 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 00:13:39.760257 containerd[1512]: time="2025-10-13T00:13:39.760170439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7179ea62c3f41e96177d42c8ff0ba1cc6ea9720a8f3fb31c540e2a28ee77f62\" id:\"a74756971547dcbbd901db12e955ec68a8441be17735e8698bfb731ea10f700d\" pid:5531 exited_at:{seconds:1760314419 nanos:759621035}" Oct 13 00:13:39.947948 sshd[5509]: Connection closed by 10.0.0.1 port 50444 Oct 13 00:13:39.948806 sshd-session[5506]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:39.959689 systemd[1]: sshd@17-10.0.0.101:22-10.0.0.1:50444.service: Deactivated successfully. Oct 13 00:13:39.963790 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 00:13:39.966457 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit. Oct 13 00:13:39.971580 systemd[1]: Started sshd@18-10.0.0.101:22-10.0.0.1:50448.service - OpenSSH per-connection server daemon (10.0.0.1:50448). Oct 13 00:13:39.973063 systemd-logind[1484]: Removed session 18. Oct 13 00:13:40.039840 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 50448 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:40.041036 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:40.045065 systemd-logind[1484]: New session 19 of user core. Oct 13 00:13:40.058622 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 00:13:40.315146 sshd[5554]: Connection closed by 10.0.0.1 port 50448 Oct 13 00:13:40.315840 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:40.327352 systemd[1]: sshd@18-10.0.0.101:22-10.0.0.1:50448.service: Deactivated successfully. Oct 13 00:13:40.330497 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 00:13:40.331488 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit. Oct 13 00:13:40.333498 systemd-logind[1484]: Removed session 19. Oct 13 00:13:40.335406 systemd[1]: Started sshd@19-10.0.0.101:22-10.0.0.1:50454.service - OpenSSH per-connection server daemon (10.0.0.1:50454). Oct 13 00:13:40.402052 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 50454 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:40.404990 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:40.411737 systemd-logind[1484]: New session 20 of user core. Oct 13 00:13:40.418674 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 00:13:40.556586 sshd[5568]: Connection closed by 10.0.0.1 port 50454 Oct 13 00:13:40.557152 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:40.560376 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 00:13:40.561356 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit. 
Oct 13 00:13:40.561907 systemd[1]: sshd@19-10.0.0.101:22-10.0.0.1:50454.service: Deactivated successfully. Oct 13 00:13:40.564521 systemd-logind[1484]: Removed session 20. Oct 13 00:13:42.275950 containerd[1512]: time="2025-10-13T00:13:42.275904407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb51738a66f517e0b76117dff18f0009907833862ed59bc8a241233460d8ac4\" id:\"02b82c8791736be854eca063d575136d13bcbb425fed1a291e8db6b8274db046\" pid:5596 exited_at:{seconds:1760314422 nanos:275605285}" Oct 13 00:13:43.167222 containerd[1512]: time="2025-10-13T00:13:43.167157901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba\" id:\"cacc0d1faa49bcef5523c6045a883ffcab62c4fedb11a5cdbe23656074668ab5\" pid:5621 exited_at:{seconds:1760314423 nanos:166879499}" Oct 13 00:13:43.716844 containerd[1512]: time="2025-10-13T00:13:43.716794736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7adff000aca94fd98a96b7f2b9be5d37014d17bb6e04e2e71fb14ed80a059eba\" id:\"40ff21816c34729b20d01f5122eff4a5dc08d14caee3591c365b38a1ca45a689\" pid:5644 exited_at:{seconds:1760314423 nanos:716577615}" Oct 13 00:13:45.572232 systemd[1]: Started sshd@20-10.0.0.101:22-10.0.0.1:55306.service - OpenSSH per-connection server daemon (10.0.0.1:55306). Oct 13 00:13:45.632493 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 55306 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:45.634620 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:45.640232 systemd-logind[1484]: New session 21 of user core. Oct 13 00:13:45.645619 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 00:13:45.775471 sshd[5666]: Connection closed by 10.0.0.1 port 55306 Oct 13 00:13:45.775925 sshd-session[5663]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:45.780003 systemd[1]: sshd@20-10.0.0.101:22-10.0.0.1:55306.service: Deactivated successfully. Oct 13 00:13:45.781857 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 00:13:45.783093 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit. Oct 13 00:13:45.784354 systemd-logind[1484]: Removed session 21. Oct 13 00:13:50.791643 systemd[1]: Started sshd@21-10.0.0.101:22-10.0.0.1:55312.service - OpenSSH per-connection server daemon (10.0.0.1:55312). Oct 13 00:13:50.857578 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 55312 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:50.859009 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:50.862800 systemd-logind[1484]: New session 22 of user core. Oct 13 00:13:50.869615 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 00:13:51.013518 sshd[5685]: Connection closed by 10.0.0.1 port 55312 Oct 13 00:13:51.013815 sshd-session[5682]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:51.017395 systemd[1]: sshd@21-10.0.0.101:22-10.0.0.1:55312.service: Deactivated successfully. Oct 13 00:13:51.019172 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 00:13:51.019860 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit. Oct 13 00:13:51.021245 systemd-logind[1484]: Removed session 22. Oct 13 00:13:56.025953 systemd[1]: Started sshd@22-10.0.0.101:22-10.0.0.1:60054.service - OpenSSH per-connection server daemon (10.0.0.1:60054). 
Oct 13 00:13:56.078651 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 60054 ssh2: RSA SHA256:Aw9oAoWAuMvXj6H09wQbapJ3Oh0AjEUFKiNxNMiNHdw Oct 13 00:13:56.080172 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:13:56.087717 systemd-logind[1484]: New session 23 of user core. Oct 13 00:13:56.096683 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 00:13:56.268786 sshd[5701]: Connection closed by 10.0.0.1 port 60054 Oct 13 00:13:56.269508 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Oct 13 00:13:56.274053 systemd[1]: sshd@22-10.0.0.101:22-10.0.0.1:60054.service: Deactivated successfully. Oct 13 00:13:56.276064 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 00:13:56.276832 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit. Oct 13 00:13:56.278182 systemd-logind[1484]: Removed session 23.