Jul 9 09:54:23.803913 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 9 09:54:23.803933 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 08:35:24 -00 2025
Jul 9 09:54:23.803942 kernel: KASLR enabled
Jul 9 09:54:23.803948 kernel: efi: EFI v2.7 by EDK II
Jul 9 09:54:23.803953 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 9 09:54:23.803958 kernel: random: crng init done
Jul 9 09:54:23.803965 kernel: secureboot: Secure boot disabled
Jul 9 09:54:23.803971 kernel: ACPI: Early table checksum verification disabled
Jul 9 09:54:23.803976 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 9 09:54:23.803983 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 9 09:54:23.803989 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.803995 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.804000 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.804006 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.804013 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.804020 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.804027 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.804033 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.804039 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:54:23.804045 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 9 09:54:23.804050 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 9 09:54:23.804057 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 09:54:23.804063 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 9 09:54:23.804069 kernel: Zone ranges:
Jul 9 09:54:23.804075 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 09:54:23.804082 kernel: DMA32 empty
Jul 9 09:54:23.804087 kernel: Normal empty
Jul 9 09:54:23.804093 kernel: Device empty
Jul 9 09:54:23.804099 kernel: Movable zone start for each node
Jul 9 09:54:23.804105 kernel: Early memory node ranges
Jul 9 09:54:23.804111 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 9 09:54:23.804117 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 9 09:54:23.804124 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 9 09:54:23.804130 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 9 09:54:23.804136 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 9 09:54:23.804142 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 9 09:54:23.804148 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 9 09:54:23.804155 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 9 09:54:23.804161 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 9 09:54:23.804167 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 9 09:54:23.804176 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 9 09:54:23.804183 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 9 09:54:23.804190 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 9 09:54:23.804198 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 09:54:23.804205 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 9 09:54:23.804212 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 9 09:54:23.804219 kernel: psci: probing for conduit method from ACPI.
Jul 9 09:54:23.804225 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 09:54:23.804231 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 09:54:23.804238 kernel: psci: Trusted OS migration not required
Jul 9 09:54:23.804245 kernel: psci: SMC Calling Convention v1.1
Jul 9 09:54:23.804252 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 9 09:54:23.804258 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 9 09:54:23.804266 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 9 09:54:23.804273 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 9 09:54:23.804279 kernel: Detected PIPT I-cache on CPU0
Jul 9 09:54:23.804286 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 09:54:23.804292 kernel: CPU features: detected: Spectre-v4
Jul 9 09:54:23.804299 kernel: CPU features: detected: Spectre-BHB
Jul 9 09:54:23.804305 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 09:54:23.804311 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 09:54:23.804318 kernel: CPU features: detected: ARM erratum 1418040
Jul 9 09:54:23.804325 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 09:54:23.804331 kernel: alternatives: applying boot alternatives
Jul 9 09:54:23.804338 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74a33b1d464884e3b2573e51f747b6939e1912812116b4748b2b08804b5b74c1
Jul 9 09:54:23.804347 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 09:54:23.804353 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 09:54:23.804360 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 09:54:23.804366 kernel: Fallback order for Node 0: 0
Jul 9 09:54:23.804373 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 9 09:54:23.804379 kernel: Policy zone: DMA
Jul 9 09:54:23.804399 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 09:54:23.804406 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 9 09:54:23.804412 kernel: software IO TLB: area num 4.
Jul 9 09:54:23.804419 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 9 09:54:23.804425 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 9 09:54:23.804433 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 09:54:23.804440 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 09:54:23.804447 kernel: rcu: RCU event tracing is enabled.
Jul 9 09:54:23.804454 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 09:54:23.804461 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 09:54:23.804468 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 09:54:23.804475 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 09:54:23.804481 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 09:54:23.804488 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 09:54:23.804495 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 09:54:23.804502 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 09:54:23.804510 kernel: GICv3: 256 SPIs implemented
Jul 9 09:54:23.804517 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 09:54:23.804523 kernel: Root IRQ handler: gic_handle_irq
Jul 9 09:54:23.804529 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 9 09:54:23.804536 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 9 09:54:23.804543 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 9 09:54:23.804549 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 9 09:54:23.804556 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 9 09:54:23.804562 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 9 09:54:23.804568 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 9 09:54:23.804589 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 9 09:54:23.804596 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 09:54:23.804606 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 09:54:23.804615 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 9 09:54:23.804621 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 9 09:54:23.804628 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 9 09:54:23.804634 kernel: arm-pv: using stolen time PV
Jul 9 09:54:23.804641 kernel: Console: colour dummy device 80x25
Jul 9 09:54:23.804648 kernel: ACPI: Core revision 20240827
Jul 9 09:54:23.804654 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 9 09:54:23.804661 kernel: pid_max: default: 32768 minimum: 301
Jul 9 09:54:23.804667 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 09:54:23.804675 kernel: landlock: Up and running.
Jul 9 09:54:23.804682 kernel: SELinux: Initializing.
Jul 9 09:54:23.804689 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 09:54:23.804695 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 09:54:23.804702 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 09:54:23.804708 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 09:54:23.804715 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 09:54:23.804722 kernel: Remapping and enabling EFI services.
Jul 9 09:54:23.804736 kernel: smp: Bringing up secondary CPUs ...
Jul 9 09:54:23.804750 kernel: Detected PIPT I-cache on CPU1
Jul 9 09:54:23.804757 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 9 09:54:23.804764 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 9 09:54:23.804772 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 09:54:23.804779 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 9 09:54:23.804786 kernel: Detected PIPT I-cache on CPU2
Jul 9 09:54:23.804793 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 9 09:54:23.804800 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 9 09:54:23.804808 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 09:54:23.804815 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 9 09:54:23.804822 kernel: Detected PIPT I-cache on CPU3
Jul 9 09:54:23.804829 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 9 09:54:23.804836 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 9 09:54:23.804843 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 09:54:23.804850 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 9 09:54:23.804856 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 09:54:23.804863 kernel: SMP: Total of 4 processors activated.
Jul 9 09:54:23.804872 kernel: CPU: All CPU(s) started at EL1
Jul 9 09:54:23.804880 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 09:54:23.804887 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 09:54:23.804894 kernel: CPU features: detected: Common not Private translations
Jul 9 09:54:23.804901 kernel: CPU features: detected: CRC32 instructions
Jul 9 09:54:23.804908 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 9 09:54:23.804915 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 09:54:23.804922 kernel: CPU features: detected: LSE atomic instructions
Jul 9 09:54:23.804929 kernel: CPU features: detected: Privileged Access Never
Jul 9 09:54:23.804937 kernel: CPU features: detected: RAS Extension Support
Jul 9 09:54:23.804944 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 09:54:23.804951 kernel: alternatives: applying system-wide alternatives
Jul 9 09:54:23.804958 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 9 09:54:23.804965 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved)
Jul 9 09:54:23.804972 kernel: devtmpfs: initialized
Jul 9 09:54:23.804979 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 09:54:23.804986 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 09:54:23.804993 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 09:54:23.805001 kernel: 0 pages in range for non-PLT usage
Jul 9 09:54:23.805008 kernel: 508448 pages in range for PLT usage
Jul 9 09:54:23.805015 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 09:54:23.805021 kernel: SMBIOS 3.0.0 present.
Jul 9 09:54:23.805028 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 9 09:54:23.805035 kernel: DMI: Memory slots populated: 1/1
Jul 9 09:54:23.805042 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 09:54:23.805049 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 09:54:23.805056 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 09:54:23.805064 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 09:54:23.805071 kernel: audit: initializing netlink subsys (disabled)
Jul 9 09:54:23.805078 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 9 09:54:23.805085 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 09:54:23.805092 kernel: cpuidle: using governor menu
Jul 9 09:54:23.805098 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 09:54:23.805105 kernel: ASID allocator initialised with 32768 entries
Jul 9 09:54:23.805112 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 09:54:23.805119 kernel: Serial: AMBA PL011 UART driver
Jul 9 09:54:23.805127 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 09:54:23.805134 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 09:54:23.805141 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 09:54:23.805148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 09:54:23.805155 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 09:54:23.805161 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 09:54:23.805168 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 09:54:23.805175 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 09:54:23.805182 kernel: ACPI: Added _OSI(Module Device)
Jul 9 09:54:23.805190 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 09:54:23.805197 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 09:54:23.805204 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 09:54:23.805211 kernel: ACPI: Interpreter enabled
Jul 9 09:54:23.805217 kernel: ACPI: Using GIC for interrupt routing
Jul 9 09:54:23.805224 kernel: ACPI: MCFG table detected, 1 entries
Jul 9 09:54:23.805231 kernel: ACPI: CPU0 has been hot-added
Jul 9 09:54:23.805238 kernel: ACPI: CPU1 has been hot-added
Jul 9 09:54:23.805245 kernel: ACPI: CPU2 has been hot-added
Jul 9 09:54:23.805251 kernel: ACPI: CPU3 has been hot-added
Jul 9 09:54:23.805260 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 09:54:23.805267 kernel: printk: legacy console [ttyAMA0] enabled
Jul 9 09:54:23.805274 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 09:54:23.805409 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 09:54:23.805477 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 9 09:54:23.805538 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 9 09:54:23.805614 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 9 09:54:23.805677 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 9 09:54:23.805686 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 9 09:54:23.805693 kernel: PCI host bridge to bus 0000:00
Jul 9 09:54:23.805767 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 9 09:54:23.805823 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 9 09:54:23.805876 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 9 09:54:23.805928 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 09:54:23.806006 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 9 09:54:23.806082 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 09:54:23.806146 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 9 09:54:23.806207 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 9 09:54:23.806267 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 9 09:54:23.806327 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 9 09:54:23.806410 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 9 09:54:23.806476 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 9 09:54:23.806532 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 9 09:54:23.806646 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 9 09:54:23.806704 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 9 09:54:23.806714 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 9 09:54:23.806722 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 9 09:54:23.806736 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 9 09:54:23.806747 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 9 09:54:23.806755 kernel: iommu: Default domain type: Translated
Jul 9 09:54:23.806762 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 09:54:23.806769 kernel: efivars: Registered efivars operations
Jul 9 09:54:23.806776 kernel: vgaarb: loaded
Jul 9 09:54:23.806783 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 09:54:23.806791 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 09:54:23.806798 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 09:54:23.806805 kernel: pnp: PnP ACPI init
Jul 9 09:54:23.806899 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 9 09:54:23.806912 kernel: pnp: PnP ACPI: found 1 devices
Jul 9 09:54:23.806919 kernel: NET: Registered PF_INET protocol family
Jul 9 09:54:23.806927 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 09:54:23.806934 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 09:54:23.806941 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 09:54:23.806949 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 09:54:23.806956 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 09:54:23.806965 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 09:54:23.806973 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 09:54:23.806980 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 09:54:23.806987 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 09:54:23.806994 kernel: PCI: CLS 0 bytes, default 64
Jul 9 09:54:23.807001 kernel: kvm [1]: HYP mode not available
Jul 9 09:54:23.807008 kernel: Initialise system trusted keyrings
Jul 9 09:54:23.807015 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 09:54:23.807022 kernel: Key type asymmetric registered
Jul 9 09:54:23.807031 kernel: Asymmetric key parser 'x509' registered
Jul 9 09:54:23.807038 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 9 09:54:23.807045 kernel: io scheduler mq-deadline registered
Jul 9 09:54:23.807055 kernel: io scheduler kyber registered
Jul 9 09:54:23.807063 kernel: io scheduler bfq registered
Jul 9 09:54:23.807070 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 9 09:54:23.807077 kernel: ACPI: button: Power Button [PWRB]
Jul 9 09:54:23.807085 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 9 09:54:23.807167 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 9 09:54:23.807180 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 09:54:23.807187 kernel: thunder_xcv, ver 1.0
Jul 9 09:54:23.807194 kernel: thunder_bgx, ver 1.0
Jul 9 09:54:23.807201 kernel: nicpf, ver 1.0
Jul 9 09:54:23.807208 kernel: nicvf, ver 1.0
Jul 9 09:54:23.807298 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 09:54:23.807358 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T09:54:23 UTC (1752054863)
Jul 9 09:54:23.807367 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 09:54:23.807375 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 9 09:54:23.807384 kernel: watchdog: NMI not fully supported
Jul 9 09:54:23.807391 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 09:54:23.807398 kernel: NET: Registered PF_INET6 protocol family
Jul 9 09:54:23.807404 kernel: Segment Routing with IPv6
Jul 9 09:54:23.807411 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 09:54:23.807418 kernel: NET: Registered PF_PACKET protocol family
Jul 9 09:54:23.807425 kernel: Key type dns_resolver registered
Jul 9 09:54:23.807432 kernel: registered taskstats version 1
Jul 9 09:54:23.807439 kernel: Loading compiled-in X.509 certificates
Jul 9 09:54:23.807447 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 3af455426f266805bd3cf61871c72c3a0bf9894a'
Jul 9 09:54:23.807454 kernel: Demotion targets for Node 0: null
Jul 9 09:54:23.807461 kernel: Key type .fscrypt registered
Jul 9 09:54:23.807468 kernel: Key type fscrypt-provisioning registered
Jul 9 09:54:23.807475 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 09:54:23.807482 kernel: ima: Allocated hash algorithm: sha1
Jul 9 09:54:23.807489 kernel: ima: No architecture policies found
Jul 9 09:54:23.807496 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 09:54:23.807505 kernel: clk: Disabling unused clocks
Jul 9 09:54:23.807512 kernel: PM: genpd: Disabling unused power domains
Jul 9 09:54:23.807519 kernel: Warning: unable to open an initial console.
Jul 9 09:54:23.807526 kernel: Freeing unused kernel memory: 39424K
Jul 9 09:54:23.807533 kernel: Run /init as init process
Jul 9 09:54:23.807540 kernel: with arguments:
Jul 9 09:54:23.807547 kernel: /init
Jul 9 09:54:23.807554 kernel: with environment:
Jul 9 09:54:23.807560 kernel: HOME=/
Jul 9 09:54:23.807567 kernel: TERM=linux
Jul 9 09:54:23.807586 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 09:54:23.807594 systemd[1]: Successfully made /usr/ read-only.
Jul 9 09:54:23.807605 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 09:54:23.807613 systemd[1]: Detected virtualization kvm.
Jul 9 09:54:23.807620 systemd[1]: Detected architecture arm64.
Jul 9 09:54:23.807628 systemd[1]: Running in initrd.
Jul 9 09:54:23.807635 systemd[1]: No hostname configured, using default hostname.
Jul 9 09:54:23.807645 systemd[1]: Hostname set to .
Jul 9 09:54:23.807652 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 09:54:23.807660 systemd[1]: Queued start job for default target initrd.target.
Jul 9 09:54:23.807667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 09:54:23.807675 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 09:54:23.807684 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 09:54:23.807691 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 09:54:23.807699 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 09:54:23.807709 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 09:54:23.807718 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 09:54:23.807726 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 09:54:23.807741 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 09:54:23.807749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 09:54:23.807757 systemd[1]: Reached target paths.target - Path Units.
Jul 9 09:54:23.807765 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 09:54:23.807774 systemd[1]: Reached target swap.target - Swaps.
Jul 9 09:54:23.807782 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 09:54:23.807790 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 09:54:23.807797 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 09:54:23.807805 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 09:54:23.807813 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 09:54:23.807820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 09:54:23.807828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 09:54:23.807837 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 09:54:23.807844 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 09:54:23.807852 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 09:54:23.807860 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 09:54:23.807868 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 09:54:23.807876 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 09:54:23.807883 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 09:54:23.807891 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 09:54:23.807899 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 09:54:23.807908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 09:54:23.807915 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 09:54:23.807923 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 09:54:23.807931 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 09:54:23.807940 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 09:54:23.807965 systemd-journald[243]: Collecting audit messages is disabled.
Jul 9 09:54:23.807985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 09:54:23.807993 systemd-journald[243]: Journal started
Jul 9 09:54:23.808013 systemd-journald[243]: Runtime Journal (/run/log/journal/69a976d5a8f94def83013ab8921981ab) is 6M, max 48.5M, 42.4M free.
Jul 9 09:54:23.793231 systemd-modules-load[244]: Inserted module 'overlay'
Jul 9 09:54:23.812588 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 09:54:23.812616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 09:54:23.814116 systemd-modules-load[244]: Inserted module 'br_netfilter'
Jul 9 09:54:23.814823 kernel: Bridge firewalling registered
Jul 9 09:54:23.817639 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 09:54:23.832704 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 09:54:23.833773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 09:54:23.837749 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 09:54:23.838990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 09:54:23.843937 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 09:54:23.848054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 09:54:23.849815 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 09:54:23.855441 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 09:54:23.856521 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 09:54:23.857640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 09:54:23.861988 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 09:54:23.865484 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 09:54:23.868621 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74a33b1d464884e3b2573e51f747b6939e1912812116b4748b2b08804b5b74c1
Jul 9 09:54:23.918049 systemd-resolved[298]: Positive Trust Anchors:
Jul 9 09:54:23.918068 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 09:54:23.918100 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 09:54:23.927623 systemd-resolved[298]: Defaulting to hostname 'linux'.
Jul 9 09:54:23.928773 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 09:54:23.931469 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 09:54:23.952611 kernel: SCSI subsystem initialized Jul 9 09:54:23.956606 kernel: Loading iSCSI transport class v2.0-870. Jul 9 09:54:23.964622 kernel: iscsi: registered transport (tcp) Jul 9 09:54:23.979598 kernel: iscsi: registered transport (qla4xxx) Jul 9 09:54:23.979631 kernel: QLogic iSCSI HBA Driver Jul 9 09:54:23.998503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 09:54:24.023614 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 09:54:24.024837 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 09:54:24.071413 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 09:54:24.073448 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 09:54:24.135605 kernel: raid6: neonx8 gen() 15786 MB/s Jul 9 09:54:24.152593 kernel: raid6: neonx4 gen() 15815 MB/s Jul 9 09:54:24.169589 kernel: raid6: neonx2 gen() 13214 MB/s Jul 9 09:54:24.186590 kernel: raid6: neonx1 gen() 10475 MB/s Jul 9 09:54:24.203589 kernel: raid6: int64x8 gen() 6887 MB/s Jul 9 09:54:24.220589 kernel: raid6: int64x4 gen() 7349 MB/s Jul 9 09:54:24.237590 kernel: raid6: int64x2 gen() 6098 MB/s Jul 9 09:54:24.254592 kernel: raid6: int64x1 gen() 5046 MB/s Jul 9 09:54:24.254616 kernel: raid6: using algorithm neonx4 gen() 15815 MB/s Jul 9 09:54:24.271599 kernel: raid6: .... xor() 12336 MB/s, rmw enabled
Jul 9 09:54:24.271619 kernel: raid6: using neon recovery algorithm Jul 9 09:54:24.277681 kernel: xor: measuring software checksum speed Jul 9 09:54:24.277706 kernel: 8regs : 21025 MB/sec Jul 9 09:54:24.277716 kernel: 32regs : 21670 MB/sec Jul 9 09:54:24.278631 kernel: arm64_neon : 27096 MB/sec Jul 9 09:54:24.278646 kernel: xor: using function: arm64_neon (27096 MB/sec) Jul 9 09:54:24.339601 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 09:54:24.345881 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 09:54:24.348116 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 09:54:24.381773 systemd-udevd[501]: Using default interface naming scheme 'v255'. Jul 9 09:54:24.386206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 09:54:24.387889 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 09:54:24.412271 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Jul 9 09:54:24.435413 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 09:54:24.437474 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 09:54:24.490611 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 09:54:24.492856 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 09:54:24.547744 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 9 09:54:24.553007 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 9 09:54:24.554797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 09:54:24.554917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:54:24.557040 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 09:54:24.560734 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 09:54:24.567882 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 9 09:54:24.567933 kernel: GPT:9289727 != 19775487 Jul 9 09:54:24.567943 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 9 09:54:24.567952 kernel: GPT:9289727 != 19775487 Jul 9 09:54:24.567960 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 09:54:24.568758 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 09:54:24.588921 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 9 09:54:24.591198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:54:24.603612 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 09:54:24.615917 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 9 09:54:24.623219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 09:54:24.629099 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 9 09:54:24.630039 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 9 09:54:24.631746 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 09:54:24.633997 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 09:54:24.635715 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 09:54:24.638326 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 09:54:24.639923 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 09:54:24.657922 disk-uuid[593]: Primary Header is updated. 
Jul 9 09:54:24.657922 disk-uuid[593]: Secondary Entries is updated. Jul 9 09:54:24.657922 disk-uuid[593]: Secondary Header is updated. Jul 9 09:54:24.658493 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 09:54:24.663595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 09:54:25.672829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 09:54:25.672897 disk-uuid[599]: The operation has completed successfully. Jul 9 09:54:25.711378 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 09:54:25.711475 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 09:54:25.732741 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 09:54:25.761192 sh[614]: Success Jul 9 09:54:25.774616 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 09:54:25.774671 kernel: device-mapper: uevent: version 1.0.3 Jul 9 09:54:25.775899 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 09:54:25.784833 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 9 09:54:25.819183 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 09:54:25.822210 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 09:54:25.837911 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 9 09:54:25.847165 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 09:54:25.847224 kernel: BTRFS: device fsid b890ad05-381e-41d5-a872-05bd1f9d6a23 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (626) Jul 9 09:54:25.848263 kernel: BTRFS info (device dm-0): first mount of filesystem b890ad05-381e-41d5-a872-05bd1f9d6a23 Jul 9 09:54:25.848285 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 09:54:25.848947 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 09:54:25.853246 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 09:54:25.854311 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 09:54:25.855330 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 09:54:25.856143 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 09:54:25.858953 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 09:54:25.886207 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (657) Jul 9 09:54:25.886263 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 09:54:25.886274 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 09:54:25.887612 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 09:54:25.896593 kernel: BTRFS info (device vda6): last unmount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 09:54:25.897297 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 09:54:25.899626 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 09:54:25.968615 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 9 09:54:25.970968 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 09:54:26.026254 systemd-networkd[803]: lo: Link UP Jul 9 09:54:26.026971 systemd-networkd[803]: lo: Gained carrier Jul 9 09:54:26.027796 systemd-networkd[803]: Enumeration completed Jul 9 09:54:26.029012 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 09:54:26.029016 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 09:54:26.029067 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 09:54:26.030105 systemd[1]: Reached target network.target - Network. Jul 9 09:54:26.031319 systemd-networkd[803]: eth0: Link UP Jul 9 09:54:26.031322 systemd-networkd[803]: eth0: Gained carrier Jul 9 09:54:26.031332 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 9 09:54:26.052641 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 09:54:26.064641 ignition[700]: Ignition 2.21.0 Jul 9 09:54:26.064656 ignition[700]: Stage: fetch-offline Jul 9 09:54:26.064711 ignition[700]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:54:26.064728 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:54:26.064944 ignition[700]: parsed url from cmdline: "" Jul 9 09:54:26.064948 ignition[700]: no config URL provided Jul 9 09:54:26.064953 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 09:54:26.064961 ignition[700]: no config at "/usr/lib/ignition/user.ign" Jul 9 09:54:26.064986 ignition[700]: op(1): [started] loading QEMU firmware config module Jul 9 09:54:26.064991 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 9 09:54:26.074315 ignition[700]: op(1): [finished] loading QEMU firmware config module Jul 9 09:54:26.080117 ignition[700]: parsing config with SHA512: 02e8136881e30bd29d16dc8534c0f814d1bf42dedd61cff6424be7feea3444abfdea72b1cadc4a633e65ae25f8e594c90ecd55ca9027686dd99a250e231de7f3 Jul 9 09:54:26.082833 unknown[700]: fetched base config from "system" Jul 9 09:54:26.082847 unknown[700]: fetched user config from "qemu" Jul 9 09:54:26.083095 ignition[700]: fetch-offline: fetch-offline passed Jul 9 09:54:26.083152 ignition[700]: Ignition finished successfully Jul 9 09:54:26.085416 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 09:54:26.086736 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 09:54:26.087492 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 9 09:54:26.114389 ignition[817]: Ignition 2.21.0 Jul 9 09:54:26.114408 ignition[817]: Stage: kargs Jul 9 09:54:26.114532 ignition[817]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:54:26.114541 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:54:26.115098 ignition[817]: kargs: kargs passed Jul 9 09:54:26.115138 ignition[817]: Ignition finished successfully Jul 9 09:54:26.120611 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 09:54:26.122328 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 09:54:26.157093 ignition[825]: Ignition 2.21.0 Jul 9 09:54:26.157861 ignition[825]: Stage: disks Jul 9 09:54:26.158019 ignition[825]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:54:26.158029 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:54:26.158630 ignition[825]: disks: disks passed Jul 9 09:54:26.158676 ignition[825]: Ignition finished successfully Jul 9 09:54:26.161117 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 09:54:26.162540 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 09:54:26.163793 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 09:54:26.165358 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 09:54:26.166784 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 09:54:26.168058 systemd[1]: Reached target basic.target - Basic System. Jul 9 09:54:26.170495 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 09:54:26.203163 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 9 09:54:26.207191 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 09:54:26.209405 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 9 09:54:26.283596 kernel: EXT4-fs (vda9): mounted filesystem 83f4d40b-59ad-4dad-9ca3-9ab67909ff35 r/w with ordered data mode. Quota mode: none. Jul 9 09:54:26.284083 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 09:54:26.285166 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 09:54:26.287210 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 09:54:26.288652 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 09:54:26.289468 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 09:54:26.289514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 09:54:26.289538 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 09:54:26.302316 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 09:54:26.304831 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 09:54:26.308207 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (842) Jul 9 09:54:26.308230 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 09:54:26.309165 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 09:54:26.309190 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 09:54:26.312510 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 09:54:26.351618 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 09:54:26.355655 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory Jul 9 09:54:26.359307 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 09:54:26.362148 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 09:54:26.432637 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 09:54:26.434358 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 09:54:26.435708 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 09:54:26.453612 kernel: BTRFS info (device vda6): last unmount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 09:54:26.472613 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 09:54:26.482981 ignition[956]: INFO : Ignition 2.21.0 Jul 9 09:54:26.482981 ignition[956]: INFO : Stage: mount Jul 9 09:54:26.485069 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:54:26.485069 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:54:26.486489 ignition[956]: INFO : mount: mount passed Jul 9 09:54:26.486489 ignition[956]: INFO : Ignition finished successfully Jul 9 09:54:26.488094 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 09:54:26.489845 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 09:54:26.846591 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 09:54:26.849101 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 9 09:54:26.873590 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (970) Jul 9 09:54:26.875227 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 09:54:26.875246 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 09:54:26.875256 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 09:54:26.878205 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 09:54:26.904117 ignition[987]: INFO : Ignition 2.21.0 Jul 9 09:54:26.904117 ignition[987]: INFO : Stage: files Jul 9 09:54:26.905468 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:54:26.905468 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:54:26.907234 ignition[987]: DEBUG : files: compiled without relabeling support, skipping Jul 9 09:54:26.907234 ignition[987]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 09:54:26.907234 ignition[987]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 09:54:26.910619 ignition[987]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 09:54:26.911665 ignition[987]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 09:54:26.911665 ignition[987]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 09:54:26.911240 unknown[987]: wrote ssh authorized keys file for user: core Jul 9 09:54:26.914732 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 9 09:54:26.914732 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 09:54:26.917768 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 09:54:26.919210 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 09:54:26.919210 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 09:54:26.922605 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 09:54:26.924730 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 09:54:26.924730 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 9 09:54:27.526124 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 9 09:54:27.735585 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 09:54:27.735585 ignition[987]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 9 09:54:27.738455 ignition[987]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 09:54:27.746646 ignition[987]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 09:54:27.746646 ignition[987]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 9 09:54:27.746646 ignition[987]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jul 9 09:54:27.755733 systemd-networkd[803]: eth0: Gained IPv6LL Jul 9 09:54:27.766339 ignition[987]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 09:54:27.770395 ignition[987]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 09:54:27.771668 ignition[987]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 09:54:27.771668 ignition[987]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 09:54:27.771668 ignition[987]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 09:54:27.771668 ignition[987]: INFO : files: files passed Jul 9 09:54:27.771668 ignition[987]: INFO : Ignition finished successfully Jul 9 09:54:27.773320 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 09:54:27.776143 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 09:54:27.778230 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 09:54:27.788767 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 09:54:27.788872 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 09:54:27.791467 initrd-setup-root-after-ignition[1016]: grep: /sysroot/oem/oem-release: No such file or directory Jul 9 09:54:27.793773 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:54:27.793773 initrd-setup-root-after-ignition[1018]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:54:27.796694 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:54:27.797696 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 09:54:27.799002 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 09:54:27.801317 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 09:54:27.839062 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 09:54:27.839200 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 09:54:27.840876 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 09:54:27.843737 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 09:54:27.844514 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 09:54:27.845394 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 09:54:27.875711 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 09:54:27.879907 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 09:54:27.903104 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 09:54:27.904839 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 09:54:27.905841 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 9 09:54:27.907171 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 09:54:27.907299 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 09:54:27.909138 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 09:54:27.910562 systemd[1]: Stopped target basic.target - Basic System. Jul 9 09:54:27.911746 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 09:54:27.913010 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 09:54:27.914420 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 09:54:27.915924 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 09:54:27.917399 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 09:54:27.918713 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 09:54:27.920132 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 09:54:27.921533 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 09:54:27.922815 systemd[1]: Stopped target swap.target - Swaps. Jul 9 09:54:27.923912 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 09:54:27.924042 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 09:54:27.925711 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 09:54:27.927128 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 09:54:27.928473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 09:54:27.931645 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 09:54:27.932591 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 09:54:27.932715 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 9 09:54:27.934867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 09:54:27.934985 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 09:54:27.936421 systemd[1]: Stopped target paths.target - Path Units. Jul 9 09:54:27.937589 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 09:54:27.938681 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 09:54:27.939872 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 09:54:27.941263 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 09:54:27.942852 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 09:54:27.942944 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 09:54:27.944092 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 09:54:27.944172 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 09:54:27.945341 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 09:54:27.945461 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 09:54:27.946695 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 09:54:27.946802 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 09:54:27.948875 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 09:54:27.950531 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 09:54:27.951229 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 09:54:27.951352 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 09:54:27.953023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 09:54:27.953127 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 09:54:27.957901 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 9 09:54:27.968618 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 09:54:27.978566 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 09:54:27.981593 ignition[1042]: INFO : Ignition 2.21.0 Jul 9 09:54:27.984926 ignition[1042]: INFO : Stage: umount Jul 9 09:54:27.985805 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:54:27.985805 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:54:27.987357 ignition[1042]: INFO : umount: umount passed Jul 9 09:54:27.987357 ignition[1042]: INFO : Ignition finished successfully Jul 9 09:54:27.988511 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 09:54:27.988630 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 09:54:27.990245 systemd[1]: Stopped target network.target - Network. Jul 9 09:54:27.992074 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 09:54:27.992142 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 09:54:27.993667 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 09:54:27.993706 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 09:54:27.995203 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 09:54:27.995254 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 09:54:27.996565 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 09:54:27.996618 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 09:54:27.997992 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 09:54:27.999351 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 09:54:28.006516 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 09:54:28.006674 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jul 9 09:54:28.010617 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 09:54:28.011453 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 09:54:28.011551 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 09:54:28.014866 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 09:54:28.015085 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 09:54:28.015173 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 09:54:28.017993 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 09:54:28.018120 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 09:54:28.019601 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 09:54:28.019634 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 09:54:28.021713 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 09:54:28.023154 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 09:54:28.023212 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 09:54:28.024608 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 09:54:28.024645 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 09:54:28.028442 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 09:54:28.028490 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 09:54:28.029857 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 09:54:28.034150 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 09:54:28.038155 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 9 09:54:28.038978 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 09:54:28.039881 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 09:54:28.039924 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 09:54:28.044434 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 09:54:28.044549 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 09:54:28.053224 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 09:54:28.054017 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 09:54:28.055136 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 09:54:28.055172 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 09:54:28.056473 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 09:54:28.056499 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 09:54:28.057807 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 09:54:28.057849 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 09:54:28.059773 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 09:54:28.059812 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 09:54:28.061665 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 09:54:28.061708 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 09:54:28.064417 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 09:54:28.065702 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 9 09:54:28.065761 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 09:54:28.068198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jul 9 09:54:28.068239 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 09:54:28.070639 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 9 09:54:28.070677 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 09:54:28.073027 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 09:54:28.073066 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 09:54:28.074533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 09:54:28.074580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:54:28.080795 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 09:54:28.080902 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 09:54:28.082633 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 09:54:28.084823 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 09:54:28.120662 systemd[1]: Switching root. Jul 9 09:54:28.157888 systemd-journald[243]: Journal stopped Jul 9 09:54:28.901872 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). 
Jul 9 09:54:28.901925 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 09:54:28.901939 kernel: SELinux: policy capability open_perms=1 Jul 9 09:54:28.901948 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 09:54:28.901958 kernel: SELinux: policy capability always_check_network=0 Jul 9 09:54:28.901970 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 09:54:28.901980 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 09:54:28.901990 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 09:54:28.901999 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 09:54:28.902008 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 09:54:28.902017 kernel: audit: type=1403 audit(1752054868.307:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 09:54:28.902031 systemd[1]: Successfully loaded SELinux policy in 57.189ms. Jul 9 09:54:28.902051 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.655ms. Jul 9 09:54:28.902063 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 09:54:28.902074 systemd[1]: Detected virtualization kvm. Jul 9 09:54:28.902084 systemd[1]: Detected architecture arm64. Jul 9 09:54:28.902095 systemd[1]: Detected first boot. Jul 9 09:54:28.902107 systemd[1]: Initializing machine ID from VM UUID. Jul 9 09:54:28.902118 zram_generator::config[1087]: No configuration found. Jul 9 09:54:28.902132 kernel: NET: Registered PF_VSOCK protocol family Jul 9 09:54:28.902142 systemd[1]: Populated /etc with preset unit settings. Jul 9 09:54:28.902155 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Jul 9 09:54:28.902165 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 09:54:28.902175 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 09:54:28.902187 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 09:54:28.902198 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 09:54:28.902209 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 09:54:28.902219 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 09:54:28.902229 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 09:54:28.902239 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 09:54:28.902251 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 09:54:28.902261 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 09:54:28.902271 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 09:54:28.902282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 09:54:28.902294 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 09:54:28.902305 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 09:54:28.902315 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 09:54:28.902325 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 09:54:28.902335 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 09:54:28.902346 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Jul 9 09:54:28.902357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 09:54:28.902367 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 09:54:28.902377 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 09:54:28.902387 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 09:54:28.902397 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 09:54:28.902406 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 09:54:28.902417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 09:54:28.902427 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 09:54:28.902437 systemd[1]: Reached target slices.target - Slice Units. Jul 9 09:54:28.902450 systemd[1]: Reached target swap.target - Swaps. Jul 9 09:54:28.902461 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 09:54:28.902470 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 09:54:28.902482 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 09:54:28.902492 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 09:54:28.902502 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 09:54:28.902512 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 09:54:28.902523 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 09:54:28.902533 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 09:54:28.902543 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 09:54:28.902553 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 09:54:28.902562 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 9 09:54:28.902584 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 09:54:28.902607 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 09:54:28.902620 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 09:54:28.902633 systemd[1]: Reached target machines.target - Containers. Jul 9 09:54:28.902642 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 09:54:28.902653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 09:54:28.902662 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 09:54:28.902672 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 09:54:28.902682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 09:54:28.902693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 09:54:28.902702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 09:54:28.902712 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 09:54:28.902732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 09:54:28.902743 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 09:54:28.902753 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 09:54:28.902763 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 09:54:28.902773 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 09:54:28.902783 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 9 09:54:28.902794 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 09:54:28.902803 kernel: fuse: init (API version 7.41) Jul 9 09:54:28.902814 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 09:54:28.902825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 09:54:28.902835 kernel: loop: module loaded Jul 9 09:54:28.902845 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 09:54:28.902856 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 09:54:28.902866 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 09:54:28.902876 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 09:54:28.902887 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 09:54:28.902897 systemd[1]: Stopped verity-setup.service. Jul 9 09:54:28.902907 kernel: ACPI: bus type drm_connector registered Jul 9 09:54:28.902917 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 09:54:28.902927 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 09:54:28.902937 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 09:54:28.902947 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 09:54:28.902958 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 09:54:28.902994 systemd-journald[1159]: Collecting audit messages is disabled. Jul 9 09:54:28.903016 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 09:54:28.903026 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 9 09:54:28.903039 systemd-journald[1159]: Journal started Jul 9 09:54:28.903059 systemd-journald[1159]: Runtime Journal (/run/log/journal/69a976d5a8f94def83013ab8921981ab) is 6M, max 48.5M, 42.4M free. Jul 9 09:54:28.705659 systemd[1]: Queued start job for default target multi-user.target. Jul 9 09:54:28.714559 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 9 09:54:28.714958 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 09:54:28.905975 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 09:54:28.906822 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 09:54:28.907925 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 09:54:28.908099 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 09:54:28.909252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 09:54:28.909405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 09:54:28.910831 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 09:54:28.910990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 09:54:28.912006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 09:54:28.912176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 09:54:28.913410 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 09:54:28.913565 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 09:54:28.914887 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 09:54:28.915058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 09:54:28.916167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 09:54:28.917602 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 9 09:54:28.918912 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 09:54:28.920236 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 09:54:28.933333 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 09:54:28.936350 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 09:54:28.938282 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 09:54:28.939257 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 09:54:28.939289 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 09:54:28.941035 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 09:54:28.951392 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 09:54:28.952445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 09:54:28.953792 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 09:54:28.955557 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 09:54:28.956709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 09:54:28.958784 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 09:54:28.960727 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 09:54:28.961640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 9 09:54:28.962980 systemd-journald[1159]: Time spent on flushing to /var/log/journal/69a976d5a8f94def83013ab8921981ab is 15.347ms for 866 entries. Jul 9 09:54:28.962980 systemd-journald[1159]: System Journal (/var/log/journal/69a976d5a8f94def83013ab8921981ab) is 8M, max 195.6M, 187.6M free. Jul 9 09:54:28.986283 systemd-journald[1159]: Received client request to flush runtime journal. Jul 9 09:54:28.964841 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 09:54:28.967424 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 09:54:28.970693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 09:54:28.971805 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 09:54:28.972766 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 09:54:28.991126 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 09:54:28.994604 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 09:54:28.996228 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 09:54:28.997662 kernel: loop0: detected capacity change from 0 to 134232 Jul 9 09:54:29.001774 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 09:54:29.003875 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jul 9 09:54:29.004219 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jul 9 09:54:29.008471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 09:54:29.010473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 09:54:29.015059 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jul 9 09:54:29.020591 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 09:54:29.032785 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 09:54:29.042757 kernel: loop1: detected capacity change from 0 to 105936 Jul 9 09:54:29.050689 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 09:54:29.052920 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 09:54:29.067598 kernel: loop2: detected capacity change from 0 to 207008 Jul 9 09:54:29.078814 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Jul 9 09:54:29.079180 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Jul 9 09:54:29.083307 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 09:54:29.093737 kernel: loop3: detected capacity change from 0 to 134232 Jul 9 09:54:29.101605 kernel: loop4: detected capacity change from 0 to 105936 Jul 9 09:54:29.106612 kernel: loop5: detected capacity change from 0 to 207008 Jul 9 09:54:29.110437 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 9 09:54:29.110861 (sd-merge)[1228]: Merged extensions into '/usr'. Jul 9 09:54:29.116844 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 09:54:29.116942 systemd[1]: Reloading... Jul 9 09:54:29.154703 zram_generator::config[1250]: No configuration found. Jul 9 09:54:29.236665 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 09:54:29.263485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 09:54:29.328048 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jul 9 09:54:29.328245 systemd[1]: Reloading finished in 210 ms. Jul 9 09:54:29.363448 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 09:54:29.366842 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 09:54:29.379897 systemd[1]: Starting ensure-sysext.service... Jul 9 09:54:29.381558 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 09:54:29.396156 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 09:54:29.396498 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 09:54:29.396862 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 09:54:29.397160 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 09:54:29.397895 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 09:54:29.398207 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jul 9 09:54:29.398325 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jul 9 09:54:29.401173 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 09:54:29.401277 systemd-tmpfiles[1291]: Skipping /boot Jul 9 09:54:29.405720 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)... Jul 9 09:54:29.405735 systemd[1]: Reloading... Jul 9 09:54:29.407383 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 09:54:29.407471 systemd-tmpfiles[1291]: Skipping /boot Jul 9 09:54:29.457614 zram_generator::config[1318]: No configuration found. 
Jul 9 09:54:29.524021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 09:54:29.586756 systemd[1]: Reloading finished in 180 ms. Jul 9 09:54:29.609410 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 09:54:29.615120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 09:54:29.626666 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 09:54:29.628818 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 09:54:29.630841 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 09:54:29.633450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 09:54:29.637057 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 09:54:29.646325 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 09:54:29.655559 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 09:54:29.664777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 09:54:29.666653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 09:54:29.685098 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 09:54:29.685740 systemd-udevd[1359]: Using default interface naming scheme 'v255'. Jul 9 09:54:29.686084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 9 09:54:29.686241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 09:54:29.689633 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 09:54:29.691542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 09:54:29.691780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 09:54:29.693166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 09:54:29.693352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 09:54:29.694939 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 09:54:29.695116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 09:54:29.705749 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 09:54:29.707380 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 09:54:29.712833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 09:54:29.716065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 09:54:29.717747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 09:54:29.721676 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 09:54:29.723733 augenrules[1403]: No rules Jul 9 09:54:29.724231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 09:54:29.731707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 09:54:29.732570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 9 09:54:29.732634 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 09:54:29.734156 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 09:54:29.741802 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 09:54:29.746863 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 09:54:29.747618 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 09:54:29.749074 systemd[1]: Finished ensure-sysext.service. Jul 9 09:54:29.752200 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 09:54:29.752459 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 09:54:29.790025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 09:54:29.792660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 09:54:29.799936 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 09:54:29.803472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 09:54:29.803684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 09:54:29.804955 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 09:54:29.805375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 09:54:29.807451 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 09:54:29.809659 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 09:54:29.813391 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jul 9 09:54:29.823153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 09:54:29.823209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 09:54:29.826739 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 9 09:54:29.829452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 09:54:29.834549 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 09:54:29.850031 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 09:54:29.870667 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 09:54:29.895671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 09:54:29.964821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:54:29.982313 systemd-networkd[1424]: lo: Link UP Jul 9 09:54:29.982320 systemd-networkd[1424]: lo: Gained carrier Jul 9 09:54:29.983213 systemd-networkd[1424]: Enumeration completed Jul 9 09:54:29.983332 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 09:54:29.987228 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 09:54:29.989553 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 09:54:29.989566 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 9 09:54:29.990239 systemd-networkd[1424]: eth0: Link UP Jul 9 09:54:29.990348 systemd-networkd[1424]: eth0: Gained carrier Jul 9 09:54:29.990363 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 09:54:29.990828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 09:54:29.994799 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 9 09:54:29.995698 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 09:54:30.010134 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 09:54:30.010933 systemd-resolved[1357]: Positive Trust Anchors: Jul 9 09:54:30.011178 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 09:54:30.011258 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 09:54:30.011407 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection. Jul 9 09:54:30.014304 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 9 09:54:30.014356 systemd-timesyncd[1440]: Initial clock synchronization to Wed 2025-07-09 09:54:29.833029 UTC. Jul 9 09:54:30.017643 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 9 09:54:30.020889 systemd-resolved[1357]: Defaulting to hostname 'linux'. Jul 9 09:54:30.024175 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 09:54:30.025037 systemd[1]: Reached target network.target - Network. Jul 9 09:54:30.025805 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 09:54:30.026609 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 09:54:30.027388 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 09:54:30.028286 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 09:54:30.029316 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 09:54:30.030242 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 09:54:30.031165 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 09:54:30.032042 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 09:54:30.032077 systemd[1]: Reached target paths.target - Path Units. Jul 9 09:54:30.032703 systemd[1]: Reached target timers.target - Timer Units. Jul 9 09:54:30.034093 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 09:54:30.036044 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 09:54:30.038499 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 09:54:30.039646 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 09:54:30.040508 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 09:54:30.044356 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jul 9 09:54:30.045422 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 09:54:30.046829 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 09:54:30.047658 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 09:54:30.048319 systemd[1]: Reached target basic.target - Basic System.
Jul 9 09:54:30.049020 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 09:54:30.049050 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 09:54:30.049958 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 09:54:30.051587 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 09:54:30.053139 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 09:54:30.054785 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 09:54:30.057526 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 09:54:30.058342 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 09:54:30.059224 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 09:54:30.061678 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 09:54:30.063670 jq[1483]: false
Jul 9 09:54:30.065675 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 09:54:30.068803 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 09:54:30.070724 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 09:54:30.071129 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 09:54:30.072732 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 09:54:30.077652 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 09:54:30.082226 extend-filesystems[1484]: Found /dev/vda6
Jul 9 09:54:30.087613 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 09:54:30.088825 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 09:54:30.088990 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 09:54:30.089219 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 09:54:30.089381 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 09:54:30.090647 extend-filesystems[1484]: Found /dev/vda9
Jul 9 09:54:30.092003 jq[1496]: true
Jul 9 09:54:30.090888 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 09:54:30.091070 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 09:54:30.093590 extend-filesystems[1484]: Checking size of /dev/vda9
Jul 9 09:54:30.105323 extend-filesystems[1484]: Resized partition /dev/vda9
Jul 9 09:54:30.108892 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 09:54:30.113185 extend-filesystems[1518]: resize2fs 1.47.2 (1-Jan-2025)
Jul 9 09:54:30.116993 jq[1506]: true
Jul 9 09:54:30.123667 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 9 09:54:30.147172 update_engine[1493]: I20250709 09:54:30.146924 1493 main.cc:92] Flatcar Update Engine starting
Jul 9 09:54:30.154444 dbus-daemon[1481]: [system] SELinux support is enabled
Jul 9 09:54:30.154831 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 09:54:30.158429 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 09:54:30.158835 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 09:54:30.160455 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 09:54:30.160582 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 09:54:30.163365 update_engine[1493]: I20250709 09:54:30.163278 1493 update_check_scheduler.cc:74] Next update check in 10m57s
Jul 9 09:54:30.164167 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 09:54:30.167935 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 09:54:30.172596 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 9 09:54:30.188654 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 9 09:54:30.188654 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 9 09:54:30.188654 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 9 09:54:30.192386 extend-filesystems[1484]: Resized filesystem in /dev/vda9
Jul 9 09:54:30.191976 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 09:54:30.193359 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 09:54:30.196059 bash[1537]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 09:54:30.197026 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 9 09:54:30.197419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 09:54:30.199647 systemd-logind[1492]: New seat seat0.
Jul 9 09:54:30.201304 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 09:54:30.202332 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 09:54:30.234774 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 9 09:54:30.307846 containerd[1507]: time="2025-07-09T09:54:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 9 09:54:30.310605 containerd[1507]: time="2025-07-09T09:54:30.309672520Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 9 09:54:30.318441 containerd[1507]: time="2025-07-09T09:54:30.318397680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.56µs"
Jul 9 09:54:30.318441 containerd[1507]: time="2025-07-09T09:54:30.318432840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 9 09:54:30.318521 containerd[1507]: time="2025-07-09T09:54:30.318450840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 9 09:54:30.318650 containerd[1507]: time="2025-07-09T09:54:30.318617720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 9 09:54:30.318650 containerd[1507]: time="2025-07-09T09:54:30.318641760Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 9 09:54:30.318720 containerd[1507]: time="2025-07-09T09:54:30.318665720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 09:54:30.318740 containerd[1507]: time="2025-07-09T09:54:30.318727960Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 09:54:30.318758 containerd[1507]: time="2025-07-09T09:54:30.318742440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 09:54:30.318994 containerd[1507]: time="2025-07-09T09:54:30.318961560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 09:54:30.318994 containerd[1507]: time="2025-07-09T09:54:30.318983560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 09:54:30.319039 containerd[1507]: time="2025-07-09T09:54:30.318995120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 09:54:30.319039 containerd[1507]: time="2025-07-09T09:54:30.319003880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 9 09:54:30.319093 containerd[1507]: time="2025-07-09T09:54:30.319077960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 9 09:54:30.319291 containerd[1507]: time="2025-07-09T09:54:30.319268320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 09:54:30.319314 containerd[1507]: time="2025-07-09T09:54:30.319300480Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 09:54:30.319332 containerd[1507]: time="2025-07-09T09:54:30.319312240Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 9 09:54:30.319349 containerd[1507]: time="2025-07-09T09:54:30.319342400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 9 09:54:30.319576 containerd[1507]: time="2025-07-09T09:54:30.319552640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 9 09:54:30.319645 containerd[1507]: time="2025-07-09T09:54:30.319630760Z" level=info msg="metadata content store policy set" policy=shared
Jul 9 09:54:30.322549 containerd[1507]: time="2025-07-09T09:54:30.322513160Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 9 09:54:30.322622 containerd[1507]: time="2025-07-09T09:54:30.322561320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 9 09:54:30.322622 containerd[1507]: time="2025-07-09T09:54:30.322588640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 9 09:54:30.322622 containerd[1507]: time="2025-07-09T09:54:30.322601400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 9 09:54:30.322622 containerd[1507]: time="2025-07-09T09:54:30.322612320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 9 09:54:30.322622 containerd[1507]: time="2025-07-09T09:54:30.322623760Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 9 09:54:30.322701 containerd[1507]: time="2025-07-09T09:54:30.322635280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 9 09:54:30.322701 containerd[1507]: time="2025-07-09T09:54:30.322652320Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 9 09:54:30.322701 containerd[1507]: time="2025-07-09T09:54:30.322664880Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 9 09:54:30.322701 containerd[1507]: time="2025-07-09T09:54:30.322674640Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 9 09:54:30.322701 containerd[1507]: time="2025-07-09T09:54:30.322683320Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 9 09:54:30.322701 containerd[1507]: time="2025-07-09T09:54:30.322695280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 9 09:54:30.322819 containerd[1507]: time="2025-07-09T09:54:30.322806000Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 9 09:54:30.322841 containerd[1507]: time="2025-07-09T09:54:30.322824680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 9 09:54:30.322859 containerd[1507]: time="2025-07-09T09:54:30.322839680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 9 09:54:30.322859 containerd[1507]: time="2025-07-09T09:54:30.322851320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 9 09:54:30.322893 containerd[1507]: time="2025-07-09T09:54:30.322861840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 9 09:54:30.322893 containerd[1507]: time="2025-07-09T09:54:30.322872880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 9 09:54:30.322893 containerd[1507]: time="2025-07-09T09:54:30.322883680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 9 09:54:30.322960 containerd[1507]: time="2025-07-09T09:54:30.322893600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 9 09:54:30.322960 containerd[1507]: time="2025-07-09T09:54:30.322904480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 9 09:54:30.322960 containerd[1507]: time="2025-07-09T09:54:30.322915280Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 9 09:54:30.322960 containerd[1507]: time="2025-07-09T09:54:30.322924800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 9 09:54:30.323137 containerd[1507]: time="2025-07-09T09:54:30.323105400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 9 09:54:30.323137 containerd[1507]: time="2025-07-09T09:54:30.323125880Z" level=info msg="Start snapshots syncer"
Jul 9 09:54:30.323185 containerd[1507]: time="2025-07-09T09:54:30.323154200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 9 09:54:30.324192 containerd[1507]: time="2025-07-09T09:54:30.323784520Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 9 09:54:30.324192 containerd[1507]: time="2025-07-09T09:54:30.323860440Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.323949000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324061280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324092440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324109440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324127440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324140920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324156280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324171080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324228960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 9 09:54:30.324333 containerd[1507]: time="2025-07-09T09:54:30.324249920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 9 09:54:30.324510 containerd[1507]: time="2025-07-09T09:54:30.324462320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 9 09:54:30.324529 containerd[1507]: time="2025-07-09T09:54:30.324509520Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 09:54:30.324546 containerd[1507]: time="2025-07-09T09:54:30.324527400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 09:54:30.324546 containerd[1507]: time="2025-07-09T09:54:30.324538120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 09:54:30.324595 containerd[1507]: time="2025-07-09T09:54:30.324550720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 09:54:30.324595 containerd[1507]: time="2025-07-09T09:54:30.324561480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 9 09:54:30.324653 containerd[1507]: time="2025-07-09T09:54:30.324629080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 9 09:54:30.324687 containerd[1507]: time="2025-07-09T09:54:30.324650400Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 9 09:54:30.324790 containerd[1507]: time="2025-07-09T09:54:30.324773320Z" level=info msg="runtime interface created"
Jul 9 09:54:30.324879 containerd[1507]: time="2025-07-09T09:54:30.324862680Z" level=info msg="created NRI interface"
Jul 9 09:54:30.324904 containerd[1507]: time="2025-07-09T09:54:30.324888440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 9 09:54:30.324927 containerd[1507]: time="2025-07-09T09:54:30.324905120Z" level=info msg="Connect containerd service"
Jul 9 09:54:30.324947 containerd[1507]: time="2025-07-09T09:54:30.324938440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 9 09:54:30.325586 containerd[1507]: time="2025-07-09T09:54:30.325547280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 09:54:30.421111 containerd[1507]: time="2025-07-09T09:54:30.420982800Z" level=info msg="Start subscribing containerd event"
Jul 9 09:54:30.421111 containerd[1507]: time="2025-07-09T09:54:30.421076080Z" level=info msg="Start recovering state"
Jul 9 09:54:30.421347 containerd[1507]: time="2025-07-09T09:54:30.421315200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 9 09:54:30.421449 containerd[1507]: time="2025-07-09T09:54:30.421376520Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 9 09:54:30.421449 containerd[1507]: time="2025-07-09T09:54:30.421316520Z" level=info msg="Start event monitor"
Jul 9 09:54:30.423662 containerd[1507]: time="2025-07-09T09:54:30.423636880Z" level=info msg="Start cni network conf syncer for default"
Jul 9 09:54:30.423662 containerd[1507]: time="2025-07-09T09:54:30.423660560Z" level=info msg="Start streaming server"
Jul 9 09:54:30.425468 containerd[1507]: time="2025-07-09T09:54:30.425450880Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 9 09:54:30.425468 containerd[1507]: time="2025-07-09T09:54:30.425467560Z" level=info msg="runtime interface starting up..."
Jul 9 09:54:30.425527 containerd[1507]: time="2025-07-09T09:54:30.425475760Z" level=info msg="starting plugins..."
Jul 9 09:54:30.425527 containerd[1507]: time="2025-07-09T09:54:30.425499280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 9 09:54:30.425657 containerd[1507]: time="2025-07-09T09:54:30.425640960Z" level=info msg="containerd successfully booted in 0.120594s"
Jul 9 09:54:30.425772 systemd[1]: Started containerd.service - containerd container runtime.
Jul 9 09:54:30.610657 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 09:54:30.630631 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 09:54:30.633086 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 09:54:30.649992 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 09:54:30.651632 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 09:54:30.653987 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 09:54:30.670169 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 09:54:30.672590 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 09:54:30.674419 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 9 09:54:30.675491 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 09:54:31.659722 systemd-networkd[1424]: eth0: Gained IPv6LL
Jul 9 09:54:31.663619 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 9 09:54:31.664950 systemd[1]: Reached target network-online.target - Network is Online.
Jul 9 09:54:31.667016 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 9 09:54:31.668943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 09:54:31.670845 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 9 09:54:31.693975 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 9 09:54:31.694235 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 9 09:54:31.696323 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 9 09:54:31.698008 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 9 09:54:32.197701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 09:54:32.198915 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 9 09:54:32.199822 systemd[1]: Startup finished in 2.049s (kernel) + 4.683s (initrd) + 3.950s (userspace) = 10.683s.
Jul 9 09:54:32.202287 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 09:54:32.582948 kubelet[1606]: E0709 09:54:32.582847 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 09:54:32.585389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 09:54:32.585521 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 09:54:32.586069 systemd[1]: kubelet.service: Consumed 786ms CPU time, 257.5M memory peak.
Jul 9 09:54:36.858582 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 9 09:54:36.859891 systemd[1]: Started sshd@0-10.0.0.66:22-10.0.0.1:53736.service - OpenSSH per-connection server daemon (10.0.0.1:53736).
Jul 9 09:54:36.938906 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 53736 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 09:54:36.940634 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:54:36.947053 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 9 09:54:36.947949 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 9 09:54:36.953223 systemd-logind[1492]: New session 1 of user core.
Jul 9 09:54:36.966607 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 9 09:54:36.969426 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 9 09:54:36.995512 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 9 09:54:36.997727 systemd-logind[1492]: New session c1 of user core.
Jul 9 09:54:37.102668 systemd[1624]: Queued start job for default target default.target.
Jul 9 09:54:37.120475 systemd[1624]: Created slice app.slice - User Application Slice.
Jul 9 09:54:37.120504 systemd[1624]: Reached target paths.target - Paths.
Jul 9 09:54:37.120543 systemd[1624]: Reached target timers.target - Timers.
Jul 9 09:54:37.121736 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 9 09:54:37.130600 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 9 09:54:37.130657 systemd[1624]: Reached target sockets.target - Sockets.
Jul 9 09:54:37.130694 systemd[1624]: Reached target basic.target - Basic System.
Jul 9 09:54:37.130722 systemd[1624]: Reached target default.target - Main User Target.
Jul 9 09:54:37.130747 systemd[1624]: Startup finished in 127ms.
Jul 9 09:54:37.130818 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 9 09:54:37.132049 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 9 09:54:37.190201 systemd[1]: Started sshd@1-10.0.0.66:22-10.0.0.1:53742.service - OpenSSH per-connection server daemon (10.0.0.1:53742).
Jul 9 09:54:37.241767 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 53742 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 09:54:37.242861 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:54:37.246412 systemd-logind[1492]: New session 2 of user core.
Jul 9 09:54:37.260748 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 9 09:54:37.310962 sshd[1638]: Connection closed by 10.0.0.1 port 53742
Jul 9 09:54:37.311412 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Jul 9 09:54:37.322220 systemd[1]: sshd@1-10.0.0.66:22-10.0.0.1:53742.service: Deactivated successfully.
Jul 9 09:54:37.324813 systemd[1]: session-2.scope: Deactivated successfully.
Jul 9 09:54:37.325387 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit.
Jul 9 09:54:37.327453 systemd[1]: Started sshd@2-10.0.0.66:22-10.0.0.1:53750.service - OpenSSH per-connection server daemon (10.0.0.1:53750).
Jul 9 09:54:37.328036 systemd-logind[1492]: Removed session 2.
Jul 9 09:54:37.376850 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 53750 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 09:54:37.377847 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:54:37.381593 systemd-logind[1492]: New session 3 of user core.
Jul 9 09:54:37.390728 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 9 09:54:37.437637 sshd[1647]: Connection closed by 10.0.0.1 port 53750
Jul 9 09:54:37.437678 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
Jul 9 09:54:37.452475 systemd[1]: sshd@2-10.0.0.66:22-10.0.0.1:53750.service: Deactivated successfully.
Jul 9 09:54:37.453866 systemd[1]: session-3.scope: Deactivated successfully.
Jul 9 09:54:37.456180 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit.
Jul 9 09:54:37.458047 systemd[1]: Started sshd@3-10.0.0.66:22-10.0.0.1:53756.service - OpenSSH per-connection server daemon (10.0.0.1:53756).
Jul 9 09:54:37.459102 systemd-logind[1492]: Removed session 3.
Jul 9 09:54:37.506891 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 53756 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 09:54:37.508001 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:54:37.512423 systemd-logind[1492]: New session 4 of user core.
Jul 9 09:54:37.523717 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 9 09:54:37.574535 sshd[1656]: Connection closed by 10.0.0.1 port 53756
Jul 9 09:54:37.574919 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
Jul 9 09:54:37.584399 systemd[1]: sshd@3-10.0.0.66:22-10.0.0.1:53756.service: Deactivated successfully.
Jul 9 09:54:37.586775 systemd[1]: session-4.scope: Deactivated successfully.
Jul 9 09:54:37.588713 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit.
Jul 9 09:54:37.590429 systemd[1]: Started sshd@4-10.0.0.66:22-10.0.0.1:53760.service - OpenSSH per-connection server daemon (10.0.0.1:53760).
Jul 9 09:54:37.591400 systemd-logind[1492]: Removed session 4.
Jul 9 09:54:37.641539 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 53760 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4
Jul 9 09:54:37.642752 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:54:37.647426 systemd-logind[1492]: New session 5 of user core.
Jul 9 09:54:37.656713 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 09:54:37.718097 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 09:54:37.718357 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 09:54:37.731405 sudo[1666]: pam_unix(sudo:session): session closed for user root Jul 9 09:54:37.732900 sshd[1665]: Connection closed by 10.0.0.1 port 53760 Jul 9 09:54:37.733220 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Jul 9 09:54:37.744346 systemd[1]: sshd@4-10.0.0.66:22-10.0.0.1:53760.service: Deactivated successfully. Jul 9 09:54:37.746365 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 09:54:37.747163 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit. Jul 9 09:54:37.749252 systemd[1]: Started sshd@5-10.0.0.66:22-10.0.0.1:53762.service - OpenSSH per-connection server daemon (10.0.0.1:53762). Jul 9 09:54:37.750442 systemd-logind[1492]: Removed session 5. Jul 9 09:54:37.795099 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 53762 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 09:54:37.796168 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:54:37.800313 systemd-logind[1492]: New session 6 of user core. Jul 9 09:54:37.806698 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 9 09:54:37.856810 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 09:54:37.857074 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 09:54:37.931477 sudo[1677]: pam_unix(sudo:session): session closed for user root Jul 9 09:54:37.936297 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 09:54:37.936543 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 09:54:37.945276 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 09:54:37.978709 augenrules[1699]: No rules Jul 9 09:54:37.979865 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 09:54:37.980651 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 09:54:37.981408 sudo[1676]: pam_unix(sudo:session): session closed for user root Jul 9 09:54:37.982434 sshd[1675]: Connection closed by 10.0.0.1 port 53762 Jul 9 09:54:37.982840 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Jul 9 09:54:37.989321 systemd[1]: sshd@5-10.0.0.66:22-10.0.0.1:53762.service: Deactivated successfully. Jul 9 09:54:37.990655 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 09:54:37.991261 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit. Jul 9 09:54:37.993132 systemd[1]: Started sshd@6-10.0.0.66:22-10.0.0.1:53778.service - OpenSSH per-connection server daemon (10.0.0.1:53778). Jul 9 09:54:37.995641 systemd-logind[1492]: Removed session 6. Jul 9 09:54:38.033078 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 53778 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 09:54:38.034173 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:54:38.037681 systemd-logind[1492]: New session 7 of user core. 
Jul 9 09:54:38.043705 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 09:54:38.092065 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 09:54:38.092328 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 09:54:38.102321 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 09:54:38.132543 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 09:54:38.132827 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 09:54:38.543915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:54:38.544052 systemd[1]: kubelet.service: Consumed 786ms CPU time, 257.5M memory peak. Jul 9 09:54:38.545882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:54:38.563524 systemd[1]: Reload requested from client PID 1752 ('systemctl') (unit session-7.scope)... Jul 9 09:54:38.563537 systemd[1]: Reloading... Jul 9 09:54:38.622599 zram_generator::config[1791]: No configuration found. Jul 9 09:54:38.731841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 09:54:38.815004 systemd[1]: Reloading finished in 251 ms. Jul 9 09:54:38.874061 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 09:54:38.874146 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 09:54:38.875618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:54:38.875666 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak. Jul 9 09:54:38.877263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:54:38.988184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 09:54:38.991626 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 09:54:39.024446 kubelet[1840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 09:54:39.024446 kubelet[1840]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 09:54:39.024446 kubelet[1840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 09:54:39.024765 kubelet[1840]: I0709 09:54:39.024482 1840 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 09:54:39.667950 kubelet[1840]: I0709 09:54:39.667883 1840 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 09:54:39.667950 kubelet[1840]: I0709 09:54:39.667942 1840 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 09:54:39.668701 kubelet[1840]: I0709 09:54:39.668507 1840 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 09:54:39.720204 kubelet[1840]: I0709 09:54:39.720103 1840 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 09:54:39.729014 kubelet[1840]: I0709 09:54:39.728993 1840 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 09:54:39.731888 kubelet[1840]: I0709 09:54:39.731865 1840 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 09:54:39.732093 kubelet[1840]: I0709 09:54:39.732060 1840 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 09:54:39.732245 kubelet[1840]: I0709 09:54:39.732086 1840 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.66","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 09:54:39.732329 kubelet[1840]: I0709 09:54:39.732310 1840 topology_manager.go:138] "Creating topology manager with none policy" Jul 
9 09:54:39.732329 kubelet[1840]: I0709 09:54:39.732319 1840 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 09:54:39.732534 kubelet[1840]: I0709 09:54:39.732511 1840 state_mem.go:36] "Initialized new in-memory state store" Jul 9 09:54:39.735007 kubelet[1840]: I0709 09:54:39.734976 1840 kubelet.go:446] "Attempting to sync node with API server" Jul 9 09:54:39.735007 kubelet[1840]: I0709 09:54:39.735001 1840 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 09:54:39.736646 kubelet[1840]: I0709 09:54:39.736619 1840 kubelet.go:352] "Adding apiserver pod source" Jul 9 09:54:39.736646 kubelet[1840]: I0709 09:54:39.736647 1840 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 09:54:39.736832 kubelet[1840]: E0709 09:54:39.736732 1840 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:39.736832 kubelet[1840]: E0709 09:54:39.736788 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:39.739636 kubelet[1840]: I0709 09:54:39.739599 1840 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 09:54:39.742154 kubelet[1840]: I0709 09:54:39.740887 1840 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 09:54:39.742154 kubelet[1840]: W0709 09:54:39.741015 1840 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 9 09:54:39.742154 kubelet[1840]: I0709 09:54:39.741945 1840 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 09:54:39.742154 kubelet[1840]: I0709 09:54:39.741977 1840 server.go:1287] "Started kubelet" Jul 9 09:54:39.744691 kubelet[1840]: I0709 09:54:39.744513 1840 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 09:54:39.745000 kubelet[1840]: I0709 09:54:39.744945 1840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 09:54:39.745277 kubelet[1840]: I0709 09:54:39.745247 1840 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 09:54:39.746383 kubelet[1840]: I0709 09:54:39.745645 1840 server.go:479] "Adding debug handlers to kubelet server" Jul 9 09:54:39.750397 kubelet[1840]: I0709 09:54:39.747550 1840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 09:54:39.750397 kubelet[1840]: I0709 09:54:39.748036 1840 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 09:54:39.750397 kubelet[1840]: W0709 09:54:39.748281 1840 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 9 09:54:39.750397 kubelet[1840]: E0709 09:54:39.748310 1840 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 9 09:54:39.750397 kubelet[1840]: E0709 09:54:39.748924 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:39.750397 kubelet[1840]: I0709 
09:54:39.748953 1840 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 09:54:39.750397 kubelet[1840]: I0709 09:54:39.749108 1840 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 09:54:39.750397 kubelet[1840]: I0709 09:54:39.749154 1840 reconciler.go:26] "Reconciler: start to sync state" Jul 9 09:54:39.751059 kubelet[1840]: I0709 09:54:39.751025 1840 factory.go:221] Registration of the systemd container factory successfully Jul 9 09:54:39.751126 kubelet[1840]: I0709 09:54:39.751115 1840 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 09:54:39.752050 kubelet[1840]: E0709 09:54:39.752025 1840 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 09:54:39.752332 kubelet[1840]: I0709 09:54:39.752307 1840 factory.go:221] Registration of the containerd container factory successfully Jul 9 09:54:39.754850 kubelet[1840]: E0709 09:54:39.754822 1840 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.66\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jul 9 09:54:39.755203 kubelet[1840]: E0709 09:54:39.754971 1840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.66.18508ca15e02564b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.66,UID:10.0.0.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.66,},FirstTimestamp:2025-07-09 09:54:39.741957707 +0000 UTC m=+0.747658503,LastTimestamp:2025-07-09 09:54:39.741957707 +0000 UTC m=+0.747658503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.66,}" Jul 9 09:54:39.755492 kubelet[1840]: W0709 09:54:39.755471 1840 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.66" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 9 09:54:39.755622 kubelet[1840]: E0709 09:54:39.755591 1840 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.66\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 9 09:54:39.755773 kubelet[1840]: W0709 09:54:39.755756 1840 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 9 09:54:39.755863 kubelet[1840]: E0709 09:54:39.755842 1840 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jul 9 09:54:39.757675 kubelet[1840]: E0709 09:54:39.757595 1840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.66.18508ca15e9bc82a default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.66,UID:10.0.0.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.66,},FirstTimestamp:2025-07-09 09:54:39.752013866 +0000 UTC m=+0.757714662,LastTimestamp:2025-07-09 09:54:39.752013866 +0000 UTC m=+0.757714662,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.66,}" Jul 9 09:54:39.761232 kubelet[1840]: I0709 09:54:39.761205 1840 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 09:54:39.761232 kubelet[1840]: I0709 09:54:39.761219 1840 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 09:54:39.761321 kubelet[1840]: I0709 09:54:39.761245 1840 state_mem.go:36] "Initialized new in-memory state store" Jul 9 09:54:39.849406 kubelet[1840]: E0709 09:54:39.849357 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:39.856570 kubelet[1840]: I0709 09:54:39.856229 1840 policy_none.go:49] "None policy: Start" Jul 9 09:54:39.856570 kubelet[1840]: I0709 09:54:39.856255 1840 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 09:54:39.856570 kubelet[1840]: I0709 09:54:39.856267 1840 state_mem.go:35] "Initializing new in-memory state store" Jul 9 09:54:39.863822 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 09:54:39.882427 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 09:54:39.885515 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 9 09:54:39.899452 kubelet[1840]: I0709 09:54:39.899408 1840 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 09:54:39.899676 kubelet[1840]: I0709 09:54:39.899646 1840 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 09:54:39.899723 kubelet[1840]: I0709 09:54:39.899666 1840 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 09:54:39.900326 kubelet[1840]: I0709 09:54:39.899954 1840 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 09:54:39.900661 kubelet[1840]: I0709 09:54:39.900637 1840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 09:54:39.902314 kubelet[1840]: E0709 09:54:39.902286 1840 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 09:54:39.902377 kubelet[1840]: E0709 09:54:39.902332 1840 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.66\" not found" Jul 9 09:54:39.902377 kubelet[1840]: I0709 09:54:39.902372 1840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 09:54:39.902514 kubelet[1840]: I0709 09:54:39.902388 1840 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 09:54:39.902514 kubelet[1840]: I0709 09:54:39.902488 1840 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 09:54:39.902514 kubelet[1840]: I0709 09:54:39.902498 1840 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 09:54:39.902634 kubelet[1840]: E0709 09:54:39.902588 1840 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 9 09:54:39.961958 kubelet[1840]: E0709 09:54:39.961907 1840 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.66\" not found" node="10.0.0.66" Jul 9 09:54:40.001197 kubelet[1840]: I0709 09:54:40.001149 1840 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.66" Jul 9 09:54:40.007272 kubelet[1840]: I0709 09:54:40.007143 1840 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.66" Jul 9 09:54:40.007272 kubelet[1840]: E0709 09:54:40.007170 1840 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.66\": node \"10.0.0.66\" not found" Jul 9 09:54:40.033259 kubelet[1840]: E0709 09:54:40.033228 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.048509 sudo[1712]: pam_unix(sudo:session): session closed for user root Jul 9 09:54:40.049847 sshd[1711]: Connection closed by 10.0.0.1 port 53778 Jul 9 09:54:40.050340 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jul 9 09:54:40.054447 systemd[1]: sshd@6-10.0.0.66:22-10.0.0.1:53778.service: Deactivated successfully. Jul 9 09:54:40.056552 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 09:54:40.056823 systemd[1]: session-7.scope: Consumed 426ms CPU time, 74.3M memory peak. Jul 9 09:54:40.058448 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit. Jul 9 09:54:40.060561 systemd-logind[1492]: Removed session 7. 
Jul 9 09:54:40.134296 kubelet[1840]: E0709 09:54:40.134251 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.235235 kubelet[1840]: E0709 09:54:40.234852 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.335534 kubelet[1840]: E0709 09:54:40.335491 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.435973 kubelet[1840]: E0709 09:54:40.435934 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.536486 kubelet[1840]: E0709 09:54:40.536395 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.636962 kubelet[1840]: E0709 09:54:40.636909 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.670358 kubelet[1840]: I0709 09:54:40.670322 1840 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 9 09:54:40.670519 kubelet[1840]: W0709 09:54:40.670497 1840 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 9 09:54:40.737721 kubelet[1840]: E0709 09:54:40.737647 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.737721 kubelet[1840]: E0709 09:54:40.737697 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:40.838809 kubelet[1840]: E0709 09:54:40.838696 1840 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:40.939474 kubelet[1840]: E0709 09:54:40.939433 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:41.039872 kubelet[1840]: E0709 09:54:41.039814 1840 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.66\" not found" Jul 9 09:54:41.141457 kubelet[1840]: I0709 09:54:41.141353 1840 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 9 09:54:41.141822 containerd[1507]: time="2025-07-09T09:54:41.141772099Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 09:54:41.142196 kubelet[1840]: I0709 09:54:41.141950 1840 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 9 09:54:41.738147 kubelet[1840]: I0709 09:54:41.738102 1840 apiserver.go:52] "Watching apiserver" Jul 9 09:54:41.738255 kubelet[1840]: E0709 09:54:41.738107 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:41.742483 kubelet[1840]: E0709 09:54:41.742434 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:54:41.749892 kubelet[1840]: I0709 09:54:41.749844 1840 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 09:54:41.750710 systemd[1]: Created slice kubepods-besteffort-podb3bb5adb_7cb7_4737_b1b1_a758eed80e86.slice - libcontainer container kubepods-besteffort-podb3bb5adb_7cb7_4737_b1b1_a758eed80e86.slice. 
Jul 9 09:54:41.759935 kubelet[1840]: I0709 09:54:41.759889 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hx2w\" (UniqueName: \"kubernetes.io/projected/b3bb5adb-7cb7-4737-b1b1-a758eed80e86-kube-api-access-7hx2w\") pod \"kube-proxy-9stlk\" (UID: \"b3bb5adb-7cb7-4737-b1b1-a758eed80e86\") " pod="kube-system/kube-proxy-9stlk" Jul 9 09:54:41.759935 kubelet[1840]: I0709 09:54:41.759932 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-cni-net-dir\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760066 kubelet[1840]: I0709 09:54:41.759959 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-var-run-calico\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760066 kubelet[1840]: I0709 09:54:41.759996 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3bb5adb-7cb7-4737-b1b1-a758eed80e86-xtables-lock\") pod \"kube-proxy-9stlk\" (UID: \"b3bb5adb-7cb7-4737-b1b1-a758eed80e86\") " pod="kube-system/kube-proxy-9stlk" Jul 9 09:54:41.760066 kubelet[1840]: I0709 09:54:41.760028 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17e20abd-c58c-45cb-960e-cc4c34878a0d-socket-dir\") pod \"csi-node-driver-s5fkc\" (UID: \"17e20abd-c58c-45cb-960e-cc4c34878a0d\") " pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:54:41.760066 kubelet[1840]: I0709 09:54:41.760052 1840 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmlc\" (UniqueName: \"kubernetes.io/projected/17e20abd-c58c-45cb-960e-cc4c34878a0d-kube-api-access-4fmlc\") pod \"csi-node-driver-s5fkc\" (UID: \"17e20abd-c58c-45cb-960e-cc4c34878a0d\") " pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:54:41.760155 kubelet[1840]: I0709 09:54:41.760070 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-cni-bin-dir\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760155 kubelet[1840]: I0709 09:54:41.760086 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-flexvol-driver-host\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760155 kubelet[1840]: I0709 09:54:41.760103 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-tigera-ca-bundle\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760155 kubelet[1840]: I0709 09:54:41.760119 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-xtables-lock\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760155 kubelet[1840]: I0709 09:54:41.760134 1840 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17e20abd-c58c-45cb-960e-cc4c34878a0d-registration-dir\") pod \"csi-node-driver-s5fkc\" (UID: \"17e20abd-c58c-45cb-960e-cc4c34878a0d\") " pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:54:41.760250 kubelet[1840]: I0709 09:54:41.760171 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3bb5adb-7cb7-4737-b1b1-a758eed80e86-kube-proxy\") pod \"kube-proxy-9stlk\" (UID: \"b3bb5adb-7cb7-4737-b1b1-a758eed80e86\") " pod="kube-system/kube-proxy-9stlk" Jul 9 09:54:41.760250 kubelet[1840]: I0709 09:54:41.760197 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-node-certs\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760250 kubelet[1840]: I0709 09:54:41.760213 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-policysync\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760250 kubelet[1840]: I0709 09:54:41.760228 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfdv7\" (UniqueName: \"kubernetes.io/projected/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-kube-api-access-zfdv7\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760250 kubelet[1840]: I0709 09:54:41.760244 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17e20abd-c58c-45cb-960e-cc4c34878a0d-kubelet-dir\") pod \"csi-node-driver-s5fkc\" (UID: \"17e20abd-c58c-45cb-960e-cc4c34878a0d\") " pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:54:41.760338 kubelet[1840]: I0709 09:54:41.760260 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3bb5adb-7cb7-4737-b1b1-a758eed80e86-lib-modules\") pod \"kube-proxy-9stlk\" (UID: \"b3bb5adb-7cb7-4737-b1b1-a758eed80e86\") " pod="kube-system/kube-proxy-9stlk" Jul 9 09:54:41.760338 kubelet[1840]: I0709 09:54:41.760285 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-cni-log-dir\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760338 kubelet[1840]: I0709 09:54:41.760302 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-lib-modules\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760338 kubelet[1840]: I0709 09:54:41.760316 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c68bb1dc-425c-43b4-a152-b25a4e5d6c4b-var-lib-calico\") pod \"calico-node-ppwqj\" (UID: \"c68bb1dc-425c-43b4-a152-b25a4e5d6c4b\") " pod="calico-system/calico-node-ppwqj" Jul 9 09:54:41.760338 kubelet[1840]: I0709 09:54:41.760333 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/17e20abd-c58c-45cb-960e-cc4c34878a0d-varrun\") pod \"csi-node-driver-s5fkc\" (UID: \"17e20abd-c58c-45cb-960e-cc4c34878a0d\") " pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:54:41.777681 systemd[1]: Created slice kubepods-besteffort-podc68bb1dc_425c_43b4_a152_b25a4e5d6c4b.slice - libcontainer container kubepods-besteffort-podc68bb1dc_425c_43b4_a152_b25a4e5d6c4b.slice. Jul 9 09:54:41.866250 kubelet[1840]: E0709 09:54:41.866049 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:54:41.866250 kubelet[1840]: W0709 09:54:41.866079 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:54:41.866250 kubelet[1840]: E0709 09:54:41.866101 1840 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:54:41.866411 kubelet[1840]: E0709 09:54:41.866298 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:54:41.866411 kubelet[1840]: W0709 09:54:41.866307 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:54:41.866411 kubelet[1840]: E0709 09:54:41.866316 1840 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:54:41.875204 kubelet[1840]: E0709 09:54:41.874967 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:54:41.875204 kubelet[1840]: W0709 09:54:41.874990 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:54:41.875204 kubelet[1840]: E0709 09:54:41.875015 1840 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:54:41.875204 kubelet[1840]: E0709 09:54:41.875205 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:54:41.875204 kubelet[1840]: W0709 09:54:41.875215 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:54:41.875412 kubelet[1840]: E0709 09:54:41.875224 1840 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:54:41.878544 kubelet[1840]: E0709 09:54:41.878522 1840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:54:41.878544 kubelet[1840]: W0709 09:54:41.878540 1840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:54:41.878649 kubelet[1840]: E0709 09:54:41.878553 1840 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:54:42.077058 containerd[1507]: time="2025-07-09T09:54:42.076943573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9stlk,Uid:b3bb5adb-7cb7-4737-b1b1-a758eed80e86,Namespace:kube-system,Attempt:0,}" Jul 9 09:54:42.080464 containerd[1507]: time="2025-07-09T09:54:42.080423414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ppwqj,Uid:c68bb1dc-425c-43b4-a152-b25a4e5d6c4b,Namespace:calico-system,Attempt:0,}" Jul 9 09:54:42.530176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2523983546.mount: Deactivated successfully. 
Jul 9 09:54:42.536690 containerd[1507]: time="2025-07-09T09:54:42.536645175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:54:42.538473 containerd[1507]: time="2025-07-09T09:54:42.538071937Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:54:42.539098 containerd[1507]: time="2025-07-09T09:54:42.539073846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jul 9 09:54:42.541114 containerd[1507]: time="2025-07-09T09:54:42.541095211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 9 09:54:42.542068 containerd[1507]: time="2025-07-09T09:54:42.541662478Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:54:42.544750 containerd[1507]: time="2025-07-09T09:54:42.544711935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:54:42.545585 containerd[1507]: time="2025-07-09T09:54:42.545509674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 461.346341ms" Jul 9 09:54:42.546201 containerd[1507]: 
time="2025-07-09T09:54:42.546158513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 460.611593ms" Jul 9 09:54:42.562467 containerd[1507]: time="2025-07-09T09:54:42.562257414Z" level=info msg="connecting to shim 4c00fe874ee44de0977954bcaf2b9863bdecad4a080f4a3f8f4b8a8fb2414e7d" address="unix:///run/containerd/s/1bfdd1ee7dbf1cc05715f7e50c72f93c5292b3506afdd2807333624c1da6af0e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:54:42.562701 containerd[1507]: time="2025-07-09T09:54:42.562671563Z" level=info msg="connecting to shim 76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4" address="unix:///run/containerd/s/95cb75aeb0f0a264f08feafffe3886a633f4f583b971e8530e228dadef6fd79b" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:54:42.587763 systemd[1]: Started cri-containerd-76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4.scope - libcontainer container 76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4. Jul 9 09:54:42.590455 systemd[1]: Started cri-containerd-4c00fe874ee44de0977954bcaf2b9863bdecad4a080f4a3f8f4b8a8fb2414e7d.scope - libcontainer container 4c00fe874ee44de0977954bcaf2b9863bdecad4a080f4a3f8f4b8a8fb2414e7d. 
Jul 9 09:54:42.614386 containerd[1507]: time="2025-07-09T09:54:42.614343773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ppwqj,Uid:c68bb1dc-425c-43b4-a152-b25a4e5d6c4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4\"" Jul 9 09:54:42.615297 containerd[1507]: time="2025-07-09T09:54:42.615165148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9stlk,Uid:b3bb5adb-7cb7-4737-b1b1-a758eed80e86,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c00fe874ee44de0977954bcaf2b9863bdecad4a080f4a3f8f4b8a8fb2414e7d\"" Jul 9 09:54:42.617598 containerd[1507]: time="2025-07-09T09:54:42.617502059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 9 09:54:42.738938 kubelet[1840]: E0709 09:54:42.738899 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:43.739426 kubelet[1840]: E0709 09:54:43.739361 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:43.902936 kubelet[1840]: E0709 09:54:43.902835 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:54:44.739864 kubelet[1840]: E0709 09:54:44.739823 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:45.740053 kubelet[1840]: E0709 09:54:45.740010 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:45.903345 kubelet[1840]: E0709 09:54:45.903303 1840 pod_workers.go:1301] "Error syncing 
pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:54:46.741038 kubelet[1840]: E0709 09:54:46.740988 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:47.742158 kubelet[1840]: E0709 09:54:47.742113 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:47.902898 kubelet[1840]: E0709 09:54:47.902847 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:54:48.742713 kubelet[1840]: E0709 09:54:48.742663 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:49.742994 kubelet[1840]: E0709 09:54:49.742951 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:49.903530 kubelet[1840]: E0709 09:54:49.902799 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:54:50.744015 kubelet[1840]: E0709 09:54:50.743962 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:51.745195 kubelet[1840]: E0709 
09:54:51.745146 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:51.906085 kubelet[1840]: E0709 09:54:51.906022 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:54:52.745670 kubelet[1840]: E0709 09:54:52.745622 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:53.746548 kubelet[1840]: E0709 09:54:53.746514 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:53.903760 kubelet[1840]: E0709 09:54:53.903680 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:54:54.747020 kubelet[1840]: E0709 09:54:54.746973 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:55.747127 kubelet[1840]: E0709 09:54:55.747083 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:55.903606 kubelet[1840]: E0709 09:54:55.903346 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" 
Jul 9 09:54:56.747656 kubelet[1840]: E0709 09:54:56.747567 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:57.748777 kubelet[1840]: E0709 09:54:57.748714 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:57.904589 kubelet[1840]: E0709 09:54:57.904244 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:54:58.749097 kubelet[1840]: E0709 09:54:58.749023 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:59.737049 kubelet[1840]: E0709 09:54:59.736967 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:59.749942 kubelet[1840]: E0709 09:54:59.749905 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:54:59.903656 kubelet[1840]: E0709 09:54:59.903612 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:00.750979 kubelet[1840]: E0709 09:55:00.750943 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:01.752194 kubelet[1840]: E0709 09:55:01.752105 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 9 09:55:01.903803 kubelet[1840]: E0709 09:55:01.903749 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:02.753024 kubelet[1840]: E0709 09:55:02.752957 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:03.753939 kubelet[1840]: E0709 09:55:03.753882 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:03.903494 kubelet[1840]: E0709 09:55:03.903440 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:04.754803 kubelet[1840]: E0709 09:55:04.754729 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:05.755498 kubelet[1840]: E0709 09:55:05.755441 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:05.905198 kubelet[1840]: E0709 09:55:05.905161 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:06.755988 kubelet[1840]: E0709 09:55:06.755925 1840 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:07.469379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3109431634.mount: Deactivated successfully. Jul 9 09:55:07.525481 containerd[1507]: time="2025-07-09T09:55:07.525217345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:07.526137 containerd[1507]: time="2025-07-09T09:55:07.525826958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 9 09:55:07.526780 containerd[1507]: time="2025-07-09T09:55:07.526742597Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:07.528639 containerd[1507]: time="2025-07-09T09:55:07.528605793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:07.529351 containerd[1507]: time="2025-07-09T09:55:07.529212846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 24.91167811s" Jul 9 09:55:07.529351 containerd[1507]: time="2025-07-09T09:55:07.529243005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 9 09:55:07.530862 containerd[1507]: 
time="2025-07-09T09:55:07.530836493Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 9 09:55:07.531937 containerd[1507]: time="2025-07-09T09:55:07.531906205Z" level=info msg="CreateContainer within sandbox \"76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 9 09:55:07.539527 containerd[1507]: time="2025-07-09T09:55:07.539462986Z" level=info msg="Container ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:55:07.550451 containerd[1507]: time="2025-07-09T09:55:07.549942675Z" level=info msg="CreateContainer within sandbox \"76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53\"" Jul 9 09:55:07.551020 containerd[1507]: time="2025-07-09T09:55:07.550990788Z" level=info msg="StartContainer for \"ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53\"" Jul 9 09:55:07.552854 containerd[1507]: time="2025-07-09T09:55:07.552774908Z" level=info msg="connecting to shim ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53" address="unix:///run/containerd/s/95cb75aeb0f0a264f08feafffe3886a633f4f583b971e8530e228dadef6fd79b" protocol=ttrpc version=3 Jul 9 09:55:07.580775 systemd[1]: Started cri-containerd-ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53.scope - libcontainer container ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53. Jul 9 09:55:07.621270 containerd[1507]: time="2025-07-09T09:55:07.621234115Z" level=info msg="StartContainer for \"ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53\" returns successfully" Jul 9 09:55:07.645510 systemd[1]: cri-containerd-ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53.scope: Deactivated successfully. 
Jul 9 09:55:07.648751 containerd[1507]: time="2025-07-09T09:55:07.648659203Z" level=info msg="received exit event container_id:\"ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53\" id:\"ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53\" pid:2007 exited_at:{seconds:1752054907 nanos:648311299}" Jul 9 09:55:07.648935 containerd[1507]: time="2025-07-09T09:55:07.648889113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53\" id:\"ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53\" pid:2007 exited_at:{seconds:1752054907 nanos:648311299}" Jul 9 09:55:07.757065 kubelet[1840]: E0709 09:55:07.756942 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:07.903405 kubelet[1840]: E0709 09:55:07.903355 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:08.449734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea3c1fd0e5af01b89b0f2eef77cb034986d6d907c63fa06006b52bd14f289e53-rootfs.mount: Deactivated successfully. Jul 9 09:55:08.507319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303561110.mount: Deactivated successfully. 
Jul 9 09:55:08.738633 containerd[1507]: time="2025-07-09T09:55:08.738514535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:08.739498 containerd[1507]: time="2025-07-09T09:55:08.739337740Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 9 09:55:08.740120 containerd[1507]: time="2025-07-09T09:55:08.740091468Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:08.742465 containerd[1507]: time="2025-07-09T09:55:08.742433528Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.211553717s" Jul 9 09:55:08.742616 containerd[1507]: time="2025-07-09T09:55:08.742567683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 9 09:55:08.742803 containerd[1507]: time="2025-07-09T09:55:08.742538604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:08.744095 containerd[1507]: time="2025-07-09T09:55:08.744071859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 9 09:55:08.746449 containerd[1507]: time="2025-07-09T09:55:08.746046335Z" level=info msg="CreateContainer within sandbox \"4c00fe874ee44de0977954bcaf2b9863bdecad4a080f4a3f8f4b8a8fb2414e7d\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 09:55:08.752615 containerd[1507]: time="2025-07-09T09:55:08.752588337Z" level=info msg="Container 31c6a24faf7c7dca4b6bcd567d6b8f36161b2c50d05c8cceb42b4f64e85e6b6b: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:55:08.755472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590504972.mount: Deactivated successfully. Jul 9 09:55:08.757442 kubelet[1840]: E0709 09:55:08.757411 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:08.759681 containerd[1507]: time="2025-07-09T09:55:08.759634998Z" level=info msg="CreateContainer within sandbox \"4c00fe874ee44de0977954bcaf2b9863bdecad4a080f4a3f8f4b8a8fb2414e7d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"31c6a24faf7c7dca4b6bcd567d6b8f36161b2c50d05c8cceb42b4f64e85e6b6b\"" Jul 9 09:55:08.760085 containerd[1507]: time="2025-07-09T09:55:08.760026221Z" level=info msg="StartContainer for \"31c6a24faf7c7dca4b6bcd567d6b8f36161b2c50d05c8cceb42b4f64e85e6b6b\"" Jul 9 09:55:08.761519 containerd[1507]: time="2025-07-09T09:55:08.761468040Z" level=info msg="connecting to shim 31c6a24faf7c7dca4b6bcd567d6b8f36161b2c50d05c8cceb42b4f64e85e6b6b" address="unix:///run/containerd/s/1bfdd1ee7dbf1cc05715f7e50c72f93c5292b3506afdd2807333624c1da6af0e" protocol=ttrpc version=3 Jul 9 09:55:08.790726 systemd[1]: Started cri-containerd-31c6a24faf7c7dca4b6bcd567d6b8f36161b2c50d05c8cceb42b4f64e85e6b6b.scope - libcontainer container 31c6a24faf7c7dca4b6bcd567d6b8f36161b2c50d05c8cceb42b4f64e85e6b6b. 
Jul 9 09:55:08.823474 containerd[1507]: time="2025-07-09T09:55:08.823422688Z" level=info msg="StartContainer for \"31c6a24faf7c7dca4b6bcd567d6b8f36161b2c50d05c8cceb42b4f64e85e6b6b\" returns successfully" Jul 9 09:55:08.962570 kubelet[1840]: I0709 09:55:08.962501 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9stlk" podStartSLOduration=3.835599777 podStartE2EDuration="29.96248506s" podCreationTimestamp="2025-07-09 09:54:39 +0000 UTC" firstStartedPulling="2025-07-09 09:54:42.616897428 +0000 UTC m=+3.622598224" lastFinishedPulling="2025-07-09 09:55:08.743782711 +0000 UTC m=+29.749483507" observedRunningTime="2025-07-09 09:55:08.962309108 +0000 UTC m=+29.968009904" watchObservedRunningTime="2025-07-09 09:55:08.96248506 +0000 UTC m=+29.968185856" Jul 9 09:55:09.758071 kubelet[1840]: E0709 09:55:09.758031 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:09.903057 kubelet[1840]: E0709 09:55:09.902955 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:10.759249 kubelet[1840]: E0709 09:55:10.759205 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:11.759733 kubelet[1840]: E0709 09:55:11.759691 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:11.904492 kubelet[1840]: E0709 09:55:11.904388 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:12.760031 kubelet[1840]: E0709 09:55:12.759991 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:13.760532 kubelet[1840]: E0709 09:55:13.760486 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:13.903348 kubelet[1840]: E0709 09:55:13.903297 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:14.761260 kubelet[1840]: E0709 09:55:14.761195 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:15.523452 update_engine[1493]: I20250709 09:55:15.523344 1493 update_attempter.cc:509] Updating boot flags... 
Jul 9 09:55:15.761856 kubelet[1840]: E0709 09:55:15.761777 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:15.903197 kubelet[1840]: E0709 09:55:15.902996 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:16.761921 kubelet[1840]: E0709 09:55:16.761877 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:17.762349 kubelet[1840]: E0709 09:55:17.762263 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:17.904111 kubelet[1840]: E0709 09:55:17.903310 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:18.763341 kubelet[1840]: E0709 09:55:18.763277 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:19.737812 kubelet[1840]: E0709 09:55:19.737669 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:19.764412 kubelet[1840]: E0709 09:55:19.764342 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:19.905353 kubelet[1840]: E0709 09:55:19.905141 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:20.765426 kubelet[1840]: E0709 09:55:20.765378 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:21.767613 kubelet[1840]: E0709 09:55:21.765851 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:21.903960 kubelet[1840]: E0709 09:55:21.903909 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:22.354799 containerd[1507]: time="2025-07-09T09:55:22.354665802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:22.355925 containerd[1507]: time="2025-07-09T09:55:22.355898496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 9 09:55:22.356973 containerd[1507]: time="2025-07-09T09:55:22.356943555Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:22.358840 containerd[1507]: time="2025-07-09T09:55:22.358791076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:22.359382 containerd[1507]: time="2025-07-09T09:55:22.359347384Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 13.61515865s" Jul 9 09:55:22.359423 containerd[1507]: time="2025-07-09T09:55:22.359384503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 9 09:55:22.362474 containerd[1507]: time="2025-07-09T09:55:22.362046128Z" level=info msg="CreateContainer within sandbox \"76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 9 09:55:22.376097 containerd[1507]: time="2025-07-09T09:55:22.376058394Z" level=info msg="Container 06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:55:22.383827 containerd[1507]: time="2025-07-09T09:55:22.383780553Z" level=info msg="CreateContainer within sandbox \"76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26\"" Jul 9 09:55:22.384598 containerd[1507]: time="2025-07-09T09:55:22.384548897Z" level=info msg="StartContainer for \"06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26\"" Jul 9 09:55:22.386224 containerd[1507]: time="2025-07-09T09:55:22.386112104Z" level=info msg="connecting to shim 06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26" address="unix:///run/containerd/s/95cb75aeb0f0a264f08feafffe3886a633f4f583b971e8530e228dadef6fd79b" protocol=ttrpc version=3 Jul 9 09:55:22.405754 systemd[1]: Started 
cri-containerd-06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26.scope - libcontainer container 06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26. Jul 9 09:55:22.443210 containerd[1507]: time="2025-07-09T09:55:22.443124111Z" level=info msg="StartContainer for \"06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26\" returns successfully" Jul 9 09:55:22.766441 kubelet[1840]: E0709 09:55:22.766394 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:22.963516 containerd[1507]: time="2025-07-09T09:55:22.963465500Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 09:55:22.965247 systemd[1]: cri-containerd-06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26.scope: Deactivated successfully. Jul 9 09:55:22.965646 systemd[1]: cri-containerd-06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26.scope: Consumed 477ms CPU time, 185.9M memory peak, 165.8M written to disk. 
Jul 9 09:55:22.967101 containerd[1507]: time="2025-07-09T09:55:22.967068185Z" level=info msg="received exit event container_id:\"06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26\" id:\"06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26\" pid:2248 exited_at:{seconds:1752054922 nanos:966814910}" Jul 9 09:55:22.967201 containerd[1507]: time="2025-07-09T09:55:22.967072825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26\" id:\"06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26\" pid:2248 exited_at:{seconds:1752054922 nanos:966814910}" Jul 9 09:55:22.979089 kubelet[1840]: I0709 09:55:22.979062 1840 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 09:55:22.986673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06ff0d2cfed9a6e5fbc6ad9cd078a5c13492dc4ae43b89ce4db3c6f842b1dc26-rootfs.mount: Deactivated successfully. Jul 9 09:55:23.037957 systemd[1]: Created slice kubepods-burstable-pod5b03f46f_c50c_4969_a08e_e3da216bd85b.slice - libcontainer container kubepods-burstable-pod5b03f46f_c50c_4969_a08e_e3da216bd85b.slice. Jul 9 09:55:23.053626 systemd[1]: Created slice kubepods-besteffort-pod6cdbffe2_a7ef_4b73_989e_6be39e6466bb.slice - libcontainer container kubepods-besteffort-pod6cdbffe2_a7ef_4b73_989e_6be39e6466bb.slice. Jul 9 09:55:23.061281 systemd[1]: Created slice kubepods-burstable-pod5d42a16a_2886_4936_a84a_3a3065394fcf.slice - libcontainer container kubepods-burstable-pod5d42a16a_2886_4936_a84a_3a3065394fcf.slice. Jul 9 09:55:23.082827 systemd[1]: Created slice kubepods-besteffort-pod30c6cf0f_2eb1_4f16_a311_6127f4214bc0.slice - libcontainer container kubepods-besteffort-pod30c6cf0f_2eb1_4f16_a311_6127f4214bc0.slice. 
Jul 9 09:55:23.087637 systemd[1]: Created slice kubepods-besteffort-podcca6a8b1_15c4_430d_b8f9_ad2e9c29c3df.slice - libcontainer container kubepods-besteffort-podcca6a8b1_15c4_430d_b8f9_ad2e9c29c3df.slice. Jul 9 09:55:23.091778 systemd[1]: Created slice kubepods-besteffort-pode27b3f7c_6fd6_4738_a77c_aa80afb44e50.slice - libcontainer container kubepods-besteffort-pode27b3f7c_6fd6_4738_a77c_aa80afb44e50.slice. Jul 9 09:55:23.095686 systemd[1]: Created slice kubepods-besteffort-podfcaa3a38_1ce8_43db_b5c0_9aa9a3dc3614.slice - libcontainer container kubepods-besteffort-podfcaa3a38_1ce8_43db_b5c0_9aa9a3dc3614.slice. Jul 9 09:55:23.131727 kubelet[1840]: I0709 09:55:23.131680 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-whisker-backend-key-pair\") pod \"whisker-6dd7689999-fgnwp\" (UID: \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\") " pod="calico-system/whisker-6dd7689999-fgnwp" Jul 9 09:55:23.131727 kubelet[1840]: I0709 09:55:23.131720 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8s6\" (UniqueName: \"kubernetes.io/projected/cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df-kube-api-access-jp8s6\") pod \"calico-apiserver-6ff7476b68-8zv5b\" (UID: \"cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df\") " pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" Jul 9 09:55:23.131892 kubelet[1840]: I0709 09:55:23.131740 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614-tigera-ca-bundle\") pod \"calico-kube-controllers-6bb67b5544-mqk5b\" (UID: \"fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614\") " pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" Jul 9 09:55:23.131892 kubelet[1840]: I0709 09:55:23.131761 1840 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62ndj\" (UniqueName: \"kubernetes.io/projected/5b03f46f-c50c-4969-a08e-e3da216bd85b-kube-api-access-62ndj\") pod \"coredns-668d6bf9bc-g2tf5\" (UID: \"5b03f46f-c50c-4969-a08e-e3da216bd85b\") " pod="kube-system/coredns-668d6bf9bc-g2tf5" Jul 9 09:55:23.131892 kubelet[1840]: I0709 09:55:23.131778 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df-calico-apiserver-certs\") pod \"calico-apiserver-6ff7476b68-8zv5b\" (UID: \"cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df\") " pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" Jul 9 09:55:23.131892 kubelet[1840]: I0709 09:55:23.131841 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6cdbffe2-a7ef-4b73-989e-6be39e6466bb-goldmane-key-pair\") pod \"goldmane-768f4c5c69-sl8nm\" (UID: \"6cdbffe2-a7ef-4b73-989e-6be39e6466bb\") " pod="calico-system/goldmane-768f4c5c69-sl8nm" Jul 9 09:55:23.131892 kubelet[1840]: I0709 09:55:23.131877 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d42a16a-2886-4936-a84a-3a3065394fcf-config-volume\") pod \"coredns-668d6bf9bc-hqnt5\" (UID: \"5d42a16a-2886-4936-a84a-3a3065394fcf\") " pod="kube-system/coredns-668d6bf9bc-hqnt5" Jul 9 09:55:23.132000 kubelet[1840]: I0709 09:55:23.131928 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b03f46f-c50c-4969-a08e-e3da216bd85b-config-volume\") pod \"coredns-668d6bf9bc-g2tf5\" (UID: \"5b03f46f-c50c-4969-a08e-e3da216bd85b\") " pod="kube-system/coredns-668d6bf9bc-g2tf5" Jul 9 09:55:23.132000 kubelet[1840]: I0709 
09:55:23.131951 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59g7l\" (UniqueName: \"kubernetes.io/projected/fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614-kube-api-access-59g7l\") pod \"calico-kube-controllers-6bb67b5544-mqk5b\" (UID: \"fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614\") " pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" Jul 9 09:55:23.132040 kubelet[1840]: I0709 09:55:23.132001 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdbffe2-a7ef-4b73-989e-6be39e6466bb-config\") pod \"goldmane-768f4c5c69-sl8nm\" (UID: \"6cdbffe2-a7ef-4b73-989e-6be39e6466bb\") " pod="calico-system/goldmane-768f4c5c69-sl8nm" Jul 9 09:55:23.132040 kubelet[1840]: I0709 09:55:23.132017 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cdbffe2-a7ef-4b73-989e-6be39e6466bb-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-sl8nm\" (UID: \"6cdbffe2-a7ef-4b73-989e-6be39e6466bb\") " pod="calico-system/goldmane-768f4c5c69-sl8nm" Jul 9 09:55:23.132040 kubelet[1840]: I0709 09:55:23.132035 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz49t\" (UniqueName: \"kubernetes.io/projected/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-kube-api-access-wz49t\") pod \"whisker-6dd7689999-fgnwp\" (UID: \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\") " pod="calico-system/whisker-6dd7689999-fgnwp" Jul 9 09:55:23.132102 kubelet[1840]: I0709 09:55:23.132052 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmx58\" (UniqueName: \"kubernetes.io/projected/6cdbffe2-a7ef-4b73-989e-6be39e6466bb-kube-api-access-vmx58\") pod \"goldmane-768f4c5c69-sl8nm\" (UID: \"6cdbffe2-a7ef-4b73-989e-6be39e6466bb\") " 
pod="calico-system/goldmane-768f4c5c69-sl8nm" Jul 9 09:55:23.132102 kubelet[1840]: I0709 09:55:23.132069 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9h98\" (UniqueName: \"kubernetes.io/projected/e27b3f7c-6fd6-4738-a77c-aa80afb44e50-kube-api-access-v9h98\") pod \"calico-apiserver-6ff7476b68-297zz\" (UID: \"e27b3f7c-6fd6-4738-a77c-aa80afb44e50\") " pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" Jul 9 09:55:23.132102 kubelet[1840]: I0709 09:55:23.132092 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-whisker-ca-bundle\") pod \"whisker-6dd7689999-fgnwp\" (UID: \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\") " pod="calico-system/whisker-6dd7689999-fgnwp" Jul 9 09:55:23.132161 kubelet[1840]: I0709 09:55:23.132108 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e27b3f7c-6fd6-4738-a77c-aa80afb44e50-calico-apiserver-certs\") pod \"calico-apiserver-6ff7476b68-297zz\" (UID: \"e27b3f7c-6fd6-4738-a77c-aa80afb44e50\") " pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" Jul 9 09:55:23.132161 kubelet[1840]: I0709 09:55:23.132127 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h88kh\" (UniqueName: \"kubernetes.io/projected/5d42a16a-2886-4936-a84a-3a3065394fcf-kube-api-access-h88kh\") pod \"coredns-668d6bf9bc-hqnt5\" (UID: \"5d42a16a-2886-4936-a84a-3a3065394fcf\") " pod="kube-system/coredns-668d6bf9bc-hqnt5" Jul 9 09:55:23.350986 containerd[1507]: time="2025-07-09T09:55:23.350863392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2tf5,Uid:5b03f46f-c50c-4969-a08e-e3da216bd85b,Namespace:kube-system,Attempt:0,}" Jul 9 09:55:23.357932 
containerd[1507]: time="2025-07-09T09:55:23.357693416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-sl8nm,Uid:6cdbffe2-a7ef-4b73-989e-6be39e6466bb,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:23.382828 containerd[1507]: time="2025-07-09T09:55:23.382753154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqnt5,Uid:5d42a16a-2886-4936-a84a-3a3065394fcf,Namespace:kube-system,Attempt:0,}" Jul 9 09:55:23.387128 containerd[1507]: time="2025-07-09T09:55:23.387092827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd7689999-fgnwp,Uid:30c6cf0f-2eb1-4f16-a311-6127f4214bc0,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:23.391648 containerd[1507]: time="2025-07-09T09:55:23.391526178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-8zv5b,Uid:cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:55:23.399285 containerd[1507]: time="2025-07-09T09:55:23.397036068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-297zz,Uid:e27b3f7c-6fd6-4738-a77c-aa80afb44e50,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:55:23.399495 containerd[1507]: time="2025-07-09T09:55:23.399395181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb67b5544-mqk5b,Uid:fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:23.471221 containerd[1507]: time="2025-07-09T09:55:23.471145185Z" level=error msg="Failed to destroy network for sandbox \"d0f91ee0eb7daa1f212da895b3613582235719c53be5d680f7ad9a093e9fc575\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.473159 systemd[1]: run-netns-cni\x2dba03a0d3\x2d1bbf\x2de7d0\x2ddb79\x2dc4912b8cf4a5.mount: Deactivated successfully. 
Jul 9 09:55:23.475431 containerd[1507]: time="2025-07-09T09:55:23.475370581Z" level=error msg="Failed to destroy network for sandbox \"607b21d7597fc0d4c41405749dd5cda9bd0093252d6687d8623ed506ed8be0aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.492757 containerd[1507]: time="2025-07-09T09:55:23.492700994Z" level=error msg="Failed to destroy network for sandbox \"797f679e3026008990cffaa40f8d8ceefa81c30ae0a34012279def5b1a9a1915\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.517041 containerd[1507]: time="2025-07-09T09:55:23.516971948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2tf5,Uid:5b03f46f-c50c-4969-a08e-e3da216bd85b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f91ee0eb7daa1f212da895b3613582235719c53be5d680f7ad9a093e9fc575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.518608 kubelet[1840]: E0709 09:55:23.517362 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f91ee0eb7daa1f212da895b3613582235719c53be5d680f7ad9a093e9fc575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.518608 kubelet[1840]: E0709 09:55:23.517942 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d0f91ee0eb7daa1f212da895b3613582235719c53be5d680f7ad9a093e9fc575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g2tf5" Jul 9 09:55:23.518608 kubelet[1840]: E0709 09:55:23.517967 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f91ee0eb7daa1f212da895b3613582235719c53be5d680f7ad9a093e9fc575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g2tf5" Jul 9 09:55:23.518767 containerd[1507]: time="2025-07-09T09:55:23.518027247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-sl8nm,Uid:6cdbffe2-a7ef-4b73-989e-6be39e6466bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"607b21d7597fc0d4c41405749dd5cda9bd0093252d6687d8623ed506ed8be0aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.518831 kubelet[1840]: E0709 09:55:23.518011 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g2tf5_kube-system(5b03f46f-c50c-4969-a08e-e3da216bd85b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g2tf5_kube-system(5b03f46f-c50c-4969-a08e-e3da216bd85b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0f91ee0eb7daa1f212da895b3613582235719c53be5d680f7ad9a093e9fc575\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g2tf5" podUID="5b03f46f-c50c-4969-a08e-e3da216bd85b" Jul 9 09:55:23.518831 kubelet[1840]: E0709 09:55:23.518316 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607b21d7597fc0d4c41405749dd5cda9bd0093252d6687d8623ed506ed8be0aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.518831 kubelet[1840]: E0709 09:55:23.518342 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607b21d7597fc0d4c41405749dd5cda9bd0093252d6687d8623ed506ed8be0aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-sl8nm" Jul 9 09:55:23.518956 kubelet[1840]: E0709 09:55:23.518356 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607b21d7597fc0d4c41405749dd5cda9bd0093252d6687d8623ed506ed8be0aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-sl8nm" Jul 9 09:55:23.518956 kubelet[1840]: E0709 09:55:23.518380 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-sl8nm_calico-system(6cdbffe2-a7ef-4b73-989e-6be39e6466bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-sl8nm_calico-system(6cdbffe2-a7ef-4b73-989e-6be39e6466bb)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"607b21d7597fc0d4c41405749dd5cda9bd0093252d6687d8623ed506ed8be0aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-sl8nm" podUID="6cdbffe2-a7ef-4b73-989e-6be39e6466bb" Jul 9 09:55:23.521665 containerd[1507]: time="2025-07-09T09:55:23.521278622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqnt5,Uid:5d42a16a-2886-4936-a84a-3a3065394fcf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"797f679e3026008990cffaa40f8d8ceefa81c30ae0a34012279def5b1a9a1915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.521920 kubelet[1840]: E0709 09:55:23.521875 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"797f679e3026008990cffaa40f8d8ceefa81c30ae0a34012279def5b1a9a1915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.522048 kubelet[1840]: E0709 09:55:23.521941 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"797f679e3026008990cffaa40f8d8ceefa81c30ae0a34012279def5b1a9a1915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hqnt5" Jul 9 09:55:23.522048 kubelet[1840]: E0709 09:55:23.521959 1840 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"797f679e3026008990cffaa40f8d8ceefa81c30ae0a34012279def5b1a9a1915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hqnt5" Jul 9 09:55:23.522048 kubelet[1840]: E0709 09:55:23.521999 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hqnt5_kube-system(5d42a16a-2886-4936-a84a-3a3065394fcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hqnt5_kube-system(5d42a16a-2886-4936-a84a-3a3065394fcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"797f679e3026008990cffaa40f8d8ceefa81c30ae0a34012279def5b1a9a1915\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hqnt5" podUID="5d42a16a-2886-4936-a84a-3a3065394fcf" Jul 9 09:55:23.574882 containerd[1507]: time="2025-07-09T09:55:23.574826230Z" level=error msg="Failed to destroy network for sandbox \"733b3c60ca8696c4a39b45d32cb971b4ce02d91e0262bf1bef9a89dfa35851ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.575527 containerd[1507]: time="2025-07-09T09:55:23.575495137Z" level=error msg="Failed to destroy network for sandbox \"76350798225dd969ab4b152a4be055584deb790f5465f58374ba8b4394d38bb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.576018 
containerd[1507]: time="2025-07-09T09:55:23.575867809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd7689999-fgnwp,Uid:30c6cf0f-2eb1-4f16-a311-6127f4214bc0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"733b3c60ca8696c4a39b45d32cb971b4ce02d91e0262bf1bef9a89dfa35851ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.576896 containerd[1507]: time="2025-07-09T09:55:23.576683753Z" level=error msg="Failed to destroy network for sandbox \"ee0e90f1c48ce0417211de81a8314c77122253ce351e0b4a97d2ecdf01b550de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.576927 kubelet[1840]: E0709 09:55:23.576222 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733b3c60ca8696c4a39b45d32cb971b4ce02d91e0262bf1bef9a89dfa35851ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.576927 kubelet[1840]: E0709 09:55:23.576281 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733b3c60ca8696c4a39b45d32cb971b4ce02d91e0262bf1bef9a89dfa35851ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd7689999-fgnwp" Jul 9 09:55:23.576927 kubelet[1840]: E0709 09:55:23.576299 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733b3c60ca8696c4a39b45d32cb971b4ce02d91e0262bf1bef9a89dfa35851ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd7689999-fgnwp" Jul 9 09:55:23.577004 kubelet[1840]: E0709 09:55:23.576350 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dd7689999-fgnwp_calico-system(30c6cf0f-2eb1-4f16-a311-6127f4214bc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dd7689999-fgnwp_calico-system(30c6cf0f-2eb1-4f16-a311-6127f4214bc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"733b3c60ca8696c4a39b45d32cb971b4ce02d91e0262bf1bef9a89dfa35851ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd7689999-fgnwp" podUID="30c6cf0f-2eb1-4f16-a311-6127f4214bc0" Jul 9 09:55:23.578439 containerd[1507]: time="2025-07-09T09:55:23.578201163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-8zv5b,Uid:cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76350798225dd969ab4b152a4be055584deb790f5465f58374ba8b4394d38bb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.578763 kubelet[1840]: E0709 09:55:23.578703 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"76350798225dd969ab4b152a4be055584deb790f5465f58374ba8b4394d38bb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.578763 kubelet[1840]: E0709 09:55:23.578754 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76350798225dd969ab4b152a4be055584deb790f5465f58374ba8b4394d38bb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" Jul 9 09:55:23.578854 kubelet[1840]: E0709 09:55:23.578772 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76350798225dd969ab4b152a4be055584deb790f5465f58374ba8b4394d38bb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" Jul 9 09:55:23.578854 kubelet[1840]: E0709 09:55:23.578833 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ff7476b68-8zv5b_calico-apiserver(cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ff7476b68-8zv5b_calico-apiserver(cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76350798225dd969ab4b152a4be055584deb790f5465f58374ba8b4394d38bb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" podUID="cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df" Jul 9 09:55:23.579489 containerd[1507]: time="2025-07-09T09:55:23.579428098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-297zz,Uid:e27b3f7c-6fd6-4738-a77c-aa80afb44e50,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee0e90f1c48ce0417211de81a8314c77122253ce351e0b4a97d2ecdf01b550de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.579806 kubelet[1840]: E0709 09:55:23.579597 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee0e90f1c48ce0417211de81a8314c77122253ce351e0b4a97d2ecdf01b550de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.579806 kubelet[1840]: E0709 09:55:23.579638 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee0e90f1c48ce0417211de81a8314c77122253ce351e0b4a97d2ecdf01b550de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" Jul 9 09:55:23.579806 kubelet[1840]: E0709 09:55:23.579653 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee0e90f1c48ce0417211de81a8314c77122253ce351e0b4a97d2ecdf01b550de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" Jul 9 09:55:23.579921 kubelet[1840]: E0709 09:55:23.579690 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ff7476b68-297zz_calico-apiserver(e27b3f7c-6fd6-4738-a77c-aa80afb44e50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ff7476b68-297zz_calico-apiserver(e27b3f7c-6fd6-4738-a77c-aa80afb44e50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee0e90f1c48ce0417211de81a8314c77122253ce351e0b4a97d2ecdf01b550de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" podUID="e27b3f7c-6fd6-4738-a77c-aa80afb44e50" Jul 9 09:55:23.581679 containerd[1507]: time="2025-07-09T09:55:23.581618174Z" level=error msg="Failed to destroy network for sandbox \"4e3a6d6eb8473e81a828d7d83a66245a7550438f247ad830b558bff149bf6fe5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.582567 containerd[1507]: time="2025-07-09T09:55:23.582525676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb67b5544-mqk5b,Uid:fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3a6d6eb8473e81a828d7d83a66245a7550438f247ad830b558bff149bf6fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.582790 
kubelet[1840]: E0709 09:55:23.582730 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3a6d6eb8473e81a828d7d83a66245a7550438f247ad830b558bff149bf6fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.582790 kubelet[1840]: E0709 09:55:23.582771 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3a6d6eb8473e81a828d7d83a66245a7550438f247ad830b558bff149bf6fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" Jul 9 09:55:23.582790 kubelet[1840]: E0709 09:55:23.582786 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3a6d6eb8473e81a828d7d83a66245a7550438f247ad830b558bff149bf6fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" Jul 9 09:55:23.582905 kubelet[1840]: E0709 09:55:23.582818 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bb67b5544-mqk5b_calico-system(fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bb67b5544-mqk5b_calico-system(fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e3a6d6eb8473e81a828d7d83a66245a7550438f247ad830b558bff149bf6fe5\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" podUID="fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614" Jul 9 09:55:23.766982 kubelet[1840]: E0709 09:55:23.766933 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:23.908816 systemd[1]: Created slice kubepods-besteffort-pod17e20abd_c58c_45cb_960e_cc4c34878a0d.slice - libcontainer container kubepods-besteffort-pod17e20abd_c58c_45cb_960e_cc4c34878a0d.slice. Jul 9 09:55:23.910753 containerd[1507]: time="2025-07-09T09:55:23.910720548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5fkc,Uid:17e20abd-c58c-45cb-960e-cc4c34878a0d,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:23.952594 containerd[1507]: time="2025-07-09T09:55:23.952519192Z" level=error msg="Failed to destroy network for sandbox \"e36ae9edfd1cdf21a2feb2b50a8db716ec536f992a01c822f98f00fd9be5f2e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.953545 containerd[1507]: time="2025-07-09T09:55:23.953491692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5fkc,Uid:17e20abd-c58c-45cb-960e-cc4c34878a0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36ae9edfd1cdf21a2feb2b50a8db716ec536f992a01c822f98f00fd9be5f2e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.954101 kubelet[1840]: E0709 09:55:23.953713 1840 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36ae9edfd1cdf21a2feb2b50a8db716ec536f992a01c822f98f00fd9be5f2e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:23.954101 kubelet[1840]: E0709 09:55:23.953764 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36ae9edfd1cdf21a2feb2b50a8db716ec536f992a01c822f98f00fd9be5f2e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:55:23.954101 kubelet[1840]: E0709 09:55:23.953783 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36ae9edfd1cdf21a2feb2b50a8db716ec536f992a01c822f98f00fd9be5f2e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:55:23.954224 kubelet[1840]: E0709 09:55:23.953819 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s5fkc_calico-system(17e20abd-c58c-45cb-960e-cc4c34878a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s5fkc_calico-system(17e20abd-c58c-45cb-960e-cc4c34878a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e36ae9edfd1cdf21a2feb2b50a8db716ec536f992a01c822f98f00fd9be5f2e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:23.982113 containerd[1507]: time="2025-07-09T09:55:23.982071840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 9 09:55:24.378724 systemd[1]: run-netns-cni\x2d328f3260\x2dc79b\x2d547b\x2d99b7\x2d662c5815be58.mount: Deactivated successfully. Jul 9 09:55:24.378812 systemd[1]: run-netns-cni\x2ddee166c0\x2da972\x2db670\x2df8c2\x2db11164874424.mount: Deactivated successfully. Jul 9 09:55:24.378858 systemd[1]: run-netns-cni\x2da2dfaae1\x2def02\x2dc987\x2d3a38\x2dcbe6c6e22a91.mount: Deactivated successfully. Jul 9 09:55:24.378900 systemd[1]: run-netns-cni\x2d1477fb82\x2d3d6c\x2dfcd1\x2d1ef4\x2ddc8801ac26d5.mount: Deactivated successfully. Jul 9 09:55:24.767171 kubelet[1840]: E0709 09:55:24.767094 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:25.767985 kubelet[1840]: E0709 09:55:25.767940 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:26.768132 kubelet[1840]: E0709 09:55:26.768072 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:27.572554 systemd[1]: Created slice kubepods-besteffort-pod005be67c_c5b2_491a_8b2a_f1b0dfe3a532.slice - libcontainer container kubepods-besteffort-pod005be67c_c5b2_491a_8b2a_f1b0dfe3a532.slice. 
Jul 9 09:55:27.658346 kubelet[1840]: I0709 09:55:27.658273 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbdx\" (UniqueName: \"kubernetes.io/projected/005be67c-c5b2-491a-8b2a-f1b0dfe3a532-kube-api-access-5rbdx\") pod \"nginx-deployment-7fcdb87857-8xtsz\" (UID: \"005be67c-c5b2-491a-8b2a-f1b0dfe3a532\") " pod="default/nginx-deployment-7fcdb87857-8xtsz" Jul 9 09:55:27.769161 kubelet[1840]: E0709 09:55:27.769116 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:27.875589 containerd[1507]: time="2025-07-09T09:55:27.875463534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-8xtsz,Uid:005be67c-c5b2-491a-8b2a-f1b0dfe3a532,Namespace:default,Attempt:0,}" Jul 9 09:55:27.922914 containerd[1507]: time="2025-07-09T09:55:27.922850854Z" level=error msg="Failed to destroy network for sandbox \"a57b60579dba1682ca8ccf015b6244bb4a97b39afe361381335eb5d7ec96bf00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:27.923952 containerd[1507]: time="2025-07-09T09:55:27.923910436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-8xtsz,Uid:005be67c-c5b2-491a-8b2a-f1b0dfe3a532,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57b60579dba1682ca8ccf015b6244bb4a97b39afe361381335eb5d7ec96bf00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:27.924170 kubelet[1840]: E0709 09:55:27.924130 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"a57b60579dba1682ca8ccf015b6244bb4a97b39afe361381335eb5d7ec96bf00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:27.924235 kubelet[1840]: E0709 09:55:27.924196 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57b60579dba1682ca8ccf015b6244bb4a97b39afe361381335eb5d7ec96bf00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-8xtsz" Jul 9 09:55:27.924235 kubelet[1840]: E0709 09:55:27.924218 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57b60579dba1682ca8ccf015b6244bb4a97b39afe361381335eb5d7ec96bf00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-8xtsz" Jul 9 09:55:27.924307 kubelet[1840]: E0709 09:55:27.924261 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-8xtsz_default(005be67c-c5b2-491a-8b2a-f1b0dfe3a532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-8xtsz_default(005be67c-c5b2-491a-8b2a-f1b0dfe3a532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a57b60579dba1682ca8ccf015b6244bb4a97b39afe361381335eb5d7ec96bf00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-7fcdb87857-8xtsz" podUID="005be67c-c5b2-491a-8b2a-f1b0dfe3a532" Jul 9 09:55:27.924463 systemd[1]: run-netns-cni\x2d59b9b559\x2d6414\x2dc8b8\x2d5183\x2de5ee89b6d322.mount: Deactivated successfully. Jul 9 09:55:28.769276 kubelet[1840]: E0709 09:55:28.769224 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:29.769495 kubelet[1840]: E0709 09:55:29.769445 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:30.769738 kubelet[1840]: E0709 09:55:30.769686 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:31.770187 kubelet[1840]: E0709 09:55:31.770132 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:32.771185 kubelet[1840]: E0709 09:55:32.771138 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:33.771867 kubelet[1840]: E0709 09:55:33.771785 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:33.908845 containerd[1507]: time="2025-07-09T09:55:33.908808244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2tf5,Uid:5b03f46f-c50c-4969-a08e-e3da216bd85b,Namespace:kube-system,Attempt:0,}" Jul 9 09:55:33.952312 containerd[1507]: time="2025-07-09T09:55:33.952255099Z" level=error msg="Failed to destroy network for sandbox \"575f15499eb5fb9c19c4d731b32919f288b337034e36d406013c6e11f62fdac3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:33.953929 systemd[1]: 
run-netns-cni\x2d728e906e\x2dbd84\x2d87cc\x2dddb9\x2dc1995831620f.mount: Deactivated successfully. Jul 9 09:55:33.954879 containerd[1507]: time="2025-07-09T09:55:33.953909356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2tf5,Uid:5b03f46f-c50c-4969-a08e-e3da216bd85b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"575f15499eb5fb9c19c4d731b32919f288b337034e36d406013c6e11f62fdac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:33.954955 kubelet[1840]: E0709 09:55:33.954171 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575f15499eb5fb9c19c4d731b32919f288b337034e36d406013c6e11f62fdac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:33.954955 kubelet[1840]: E0709 09:55:33.954230 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575f15499eb5fb9c19c4d731b32919f288b337034e36d406013c6e11f62fdac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g2tf5" Jul 9 09:55:33.954955 kubelet[1840]: E0709 09:55:33.954249 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575f15499eb5fb9c19c4d731b32919f288b337034e36d406013c6e11f62fdac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g2tf5" Jul 9 09:55:33.955052 kubelet[1840]: E0709 09:55:33.954295 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g2tf5_kube-system(5b03f46f-c50c-4969-a08e-e3da216bd85b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g2tf5_kube-system(5b03f46f-c50c-4969-a08e-e3da216bd85b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"575f15499eb5fb9c19c4d731b32919f288b337034e36d406013c6e11f62fdac3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g2tf5" podUID="5b03f46f-c50c-4969-a08e-e3da216bd85b" Jul 9 09:55:34.772933 kubelet[1840]: E0709 09:55:34.772884 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:34.904404 containerd[1507]: time="2025-07-09T09:55:34.904345482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-297zz,Uid:e27b3f7c-6fd6-4738-a77c-aa80afb44e50,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:55:34.904404 containerd[1507]: time="2025-07-09T09:55:34.904387001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-sl8nm,Uid:6cdbffe2-a7ef-4b73-989e-6be39e6466bb,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:34.904652 containerd[1507]: time="2025-07-09T09:55:34.904345122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb67b5544-mqk5b,Uid:fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:34.957864 containerd[1507]: time="2025-07-09T09:55:34.957799746Z" level=error msg="Failed to destroy network for sandbox 
\"3b67450e36da0ae37797e360f5335b2d7d11174d2f480345524f0f5da25b4ed8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.959600 containerd[1507]: time="2025-07-09T09:55:34.959548123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb67b5544-mqk5b,Uid:fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b67450e36da0ae37797e360f5335b2d7d11174d2f480345524f0f5da25b4ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.960221 kubelet[1840]: E0709 09:55:34.959878 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b67450e36da0ae37797e360f5335b2d7d11174d2f480345524f0f5da25b4ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.960221 kubelet[1840]: E0709 09:55:34.959934 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b67450e36da0ae37797e360f5335b2d7d11174d2f480345524f0f5da25b4ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" Jul 9 09:55:34.960221 kubelet[1840]: E0709 09:55:34.959963 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3b67450e36da0ae37797e360f5335b2d7d11174d2f480345524f0f5da25b4ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" Jul 9 09:55:34.960327 kubelet[1840]: E0709 09:55:34.960021 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bb67b5544-mqk5b_calico-system(fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bb67b5544-mqk5b_calico-system(fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b67450e36da0ae37797e360f5335b2d7d11174d2f480345524f0f5da25b4ed8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" podUID="fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614" Jul 9 09:55:34.960521 systemd[1]: run-netns-cni\x2d80bb39ac\x2d12fb\x2dbec8\x2d309b\x2d04d7f62c2668.mount: Deactivated successfully. Jul 9 09:55:34.961316 containerd[1507]: time="2025-07-09T09:55:34.961273421Z" level=error msg="Failed to destroy network for sandbox \"07bb75d6aef765f4edb46d0b506300ee2487b71e3c992c965df035e67d5555d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.962988 systemd[1]: run-netns-cni\x2db8603944\x2d9be3\x2d7db0\x2d7989\x2d104d9c1ca90a.mount: Deactivated successfully. 
Jul 9 09:55:34.963169 containerd[1507]: time="2025-07-09T09:55:34.963129837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-297zz,Uid:e27b3f7c-6fd6-4738-a77c-aa80afb44e50,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07bb75d6aef765f4edb46d0b506300ee2487b71e3c992c965df035e67d5555d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.963480 kubelet[1840]: E0709 09:55:34.963442 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07bb75d6aef765f4edb46d0b506300ee2487b71e3c992c965df035e67d5555d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.963536 kubelet[1840]: E0709 09:55:34.963516 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07bb75d6aef765f4edb46d0b506300ee2487b71e3c992c965df035e67d5555d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" Jul 9 09:55:34.963563 kubelet[1840]: E0709 09:55:34.963541 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07bb75d6aef765f4edb46d0b506300ee2487b71e3c992c965df035e67d5555d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" Jul 9 09:55:34.964753 kubelet[1840]: E0709 09:55:34.964668 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ff7476b68-297zz_calico-apiserver(e27b3f7c-6fd6-4738-a77c-aa80afb44e50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ff7476b68-297zz_calico-apiserver(e27b3f7c-6fd6-4738-a77c-aa80afb44e50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07bb75d6aef765f4edb46d0b506300ee2487b71e3c992c965df035e67d5555d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" podUID="e27b3f7c-6fd6-4738-a77c-aa80afb44e50" Jul 9 09:55:34.967038 containerd[1507]: time="2025-07-09T09:55:34.966990746Z" level=error msg="Failed to destroy network for sandbox \"640bda97e03664f5a029436dfed364c871e332187721b024a3b7bae6c7fbe15b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.968163 containerd[1507]: time="2025-07-09T09:55:34.968115532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-sl8nm,Uid:6cdbffe2-a7ef-4b73-989e-6be39e6466bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"640bda97e03664f5a029436dfed364c871e332187721b024a3b7bae6c7fbe15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.968334 kubelet[1840]: E0709 09:55:34.968304 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"640bda97e03664f5a029436dfed364c871e332187721b024a3b7bae6c7fbe15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:34.968388 kubelet[1840]: E0709 09:55:34.968346 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"640bda97e03664f5a029436dfed364c871e332187721b024a3b7bae6c7fbe15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-sl8nm" Jul 9 09:55:34.968388 kubelet[1840]: E0709 09:55:34.968363 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"640bda97e03664f5a029436dfed364c871e332187721b024a3b7bae6c7fbe15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-sl8nm" Jul 9 09:55:34.968466 kubelet[1840]: E0709 09:55:34.968393 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-sl8nm_calico-system(6cdbffe2-a7ef-4b73-989e-6be39e6466bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-sl8nm_calico-system(6cdbffe2-a7ef-4b73-989e-6be39e6466bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"640bda97e03664f5a029436dfed364c871e332187721b024a3b7bae6c7fbe15b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-sl8nm" podUID="6cdbffe2-a7ef-4b73-989e-6be39e6466bb" Jul 9 09:55:34.968546 systemd[1]: run-netns-cni\x2d4883487f\x2d95b3\x2d7c40\x2d8f7f\x2d422988dae7b8.mount: Deactivated successfully. Jul 9 09:55:35.773375 kubelet[1840]: E0709 09:55:35.773323 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:35.904096 containerd[1507]: time="2025-07-09T09:55:35.903760053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqnt5,Uid:5d42a16a-2886-4936-a84a-3a3065394fcf,Namespace:kube-system,Attempt:0,}" Jul 9 09:55:35.947850 containerd[1507]: time="2025-07-09T09:55:35.947787659Z" level=error msg="Failed to destroy network for sandbox \"dd937c39d96894af5ce8eff68df08cbd97c09c626dbfd6362b9db34a871386f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:35.949682 containerd[1507]: time="2025-07-09T09:55:35.949637755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqnt5,Uid:5d42a16a-2886-4936-a84a-3a3065394fcf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd937c39d96894af5ce8eff68df08cbd97c09c626dbfd6362b9db34a871386f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:35.949892 systemd[1]: run-netns-cni\x2da9e6161b\x2ddbf0\x2d7922\x2d95b6\x2d06ea97cd5b88.mount: Deactivated successfully. 
Jul 9 09:55:35.949997 kubelet[1840]: E0709 09:55:35.949935 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd937c39d96894af5ce8eff68df08cbd97c09c626dbfd6362b9db34a871386f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:35.950045 kubelet[1840]: E0709 09:55:35.950011 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd937c39d96894af5ce8eff68df08cbd97c09c626dbfd6362b9db34a871386f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hqnt5" Jul 9 09:55:35.950045 kubelet[1840]: E0709 09:55:35.950032 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd937c39d96894af5ce8eff68df08cbd97c09c626dbfd6362b9db34a871386f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hqnt5" Jul 9 09:55:35.950108 kubelet[1840]: E0709 09:55:35.950076 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hqnt5_kube-system(5d42a16a-2886-4936-a84a-3a3065394fcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hqnt5_kube-system(5d42a16a-2886-4936-a84a-3a3065394fcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd937c39d96894af5ce8eff68df08cbd97c09c626dbfd6362b9db34a871386f9\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hqnt5" podUID="5d42a16a-2886-4936-a84a-3a3065394fcf" Jul 9 09:55:36.773730 kubelet[1840]: E0709 09:55:36.773673 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:37.774116 kubelet[1840]: E0709 09:55:37.774051 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:37.904150 containerd[1507]: time="2025-07-09T09:55:37.903755593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd7689999-fgnwp,Uid:30c6cf0f-2eb1-4f16-a311-6127f4214bc0,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:37.945352 containerd[1507]: time="2025-07-09T09:55:37.945310061Z" level=error msg="Failed to destroy network for sandbox \"b8d971d89b51db4041f4b3e1557bc7cc21ab29853cbbf64af84baee6e2a93404\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:37.947061 systemd[1]: run-netns-cni\x2d021d46f9\x2dc19f\x2da2a3\x2ded85\x2dd30e400617ce.mount: Deactivated successfully. 
Jul 9 09:55:37.948134 containerd[1507]: time="2025-07-09T09:55:37.948074508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd7689999-fgnwp,Uid:30c6cf0f-2eb1-4f16-a311-6127f4214bc0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d971d89b51db4041f4b3e1557bc7cc21ab29853cbbf64af84baee6e2a93404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:37.948387 kubelet[1840]: E0709 09:55:37.948333 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d971d89b51db4041f4b3e1557bc7cc21ab29853cbbf64af84baee6e2a93404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:37.948432 kubelet[1840]: E0709 09:55:37.948406 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d971d89b51db4041f4b3e1557bc7cc21ab29853cbbf64af84baee6e2a93404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd7689999-fgnwp" Jul 9 09:55:37.948460 kubelet[1840]: E0709 09:55:37.948448 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8d971d89b51db4041f4b3e1557bc7cc21ab29853cbbf64af84baee6e2a93404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6dd7689999-fgnwp" Jul 9 09:55:37.948522 kubelet[1840]: E0709 09:55:37.948493 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dd7689999-fgnwp_calico-system(30c6cf0f-2eb1-4f16-a311-6127f4214bc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dd7689999-fgnwp_calico-system(30c6cf0f-2eb1-4f16-a311-6127f4214bc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8d971d89b51db4041f4b3e1557bc7cc21ab29853cbbf64af84baee6e2a93404\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd7689999-fgnwp" podUID="30c6cf0f-2eb1-4f16-a311-6127f4214bc0" Jul 9 09:55:38.774959 kubelet[1840]: E0709 09:55:38.774899 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:38.903991 containerd[1507]: time="2025-07-09T09:55:38.903827039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-8zv5b,Uid:cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:55:38.903991 containerd[1507]: time="2025-07-09T09:55:38.903876038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5fkc,Uid:17e20abd-c58c-45cb-960e-cc4c34878a0d,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:38.960024 containerd[1507]: time="2025-07-09T09:55:38.959979914Z" level=error msg="Failed to destroy network for sandbox \"cd109f0dd48b1aea96830d5a9f39e10f7e0f19b8a5ebd4b458cc66db84e970f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:38.961761 systemd[1]: 
run-netns-cni\x2d815f9f39\x2dda76\x2d8bb6\x2d29b0\x2da38469ad1b43.mount: Deactivated successfully. Jul 9 09:55:38.962754 containerd[1507]: time="2025-07-09T09:55:38.962437446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-8zv5b,Uid:cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd109f0dd48b1aea96830d5a9f39e10f7e0f19b8a5ebd4b458cc66db84e970f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:38.963055 kubelet[1840]: E0709 09:55:38.962863 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd109f0dd48b1aea96830d5a9f39e10f7e0f19b8a5ebd4b458cc66db84e970f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:38.963055 kubelet[1840]: E0709 09:55:38.962934 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd109f0dd48b1aea96830d5a9f39e10f7e0f19b8a5ebd4b458cc66db84e970f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" Jul 9 09:55:38.963055 kubelet[1840]: E0709 09:55:38.962955 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd109f0dd48b1aea96830d5a9f39e10f7e0f19b8a5ebd4b458cc66db84e970f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" Jul 9 09:55:38.963168 kubelet[1840]: E0709 09:55:38.962994 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ff7476b68-8zv5b_calico-apiserver(cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ff7476b68-8zv5b_calico-apiserver(cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd109f0dd48b1aea96830d5a9f39e10f7e0f19b8a5ebd4b458cc66db84e970f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" podUID="cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df" Jul 9 09:55:38.963551 containerd[1507]: time="2025-07-09T09:55:38.963524993Z" level=error msg="Failed to destroy network for sandbox \"1ecbd8e0b43c256101098d4d5c467284aac2e14b4e2908c500aaf1a59a3ffebc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:38.965238 containerd[1507]: time="2025-07-09T09:55:38.965161775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5fkc,Uid:17e20abd-c58c-45cb-960e-cc4c34878a0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecbd8e0b43c256101098d4d5c467284aac2e14b4e2908c500aaf1a59a3ffebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:38.965384 systemd[1]: 
run-netns-cni\x2df86f7498\x2d782d\x2d430a\x2d0e73\x2dea829e68544e.mount: Deactivated successfully. Jul 9 09:55:38.965540 kubelet[1840]: E0709 09:55:38.965368 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecbd8e0b43c256101098d4d5c467284aac2e14b4e2908c500aaf1a59a3ffebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:38.965540 kubelet[1840]: E0709 09:55:38.965448 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecbd8e0b43c256101098d4d5c467284aac2e14b4e2908c500aaf1a59a3ffebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:55:38.965540 kubelet[1840]: E0709 09:55:38.965468 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecbd8e0b43c256101098d4d5c467284aac2e14b4e2908c500aaf1a59a3ffebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5fkc" Jul 9 09:55:38.965659 kubelet[1840]: E0709 09:55:38.965515 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s5fkc_calico-system(17e20abd-c58c-45cb-960e-cc4c34878a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s5fkc_calico-system(17e20abd-c58c-45cb-960e-cc4c34878a0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1ecbd8e0b43c256101098d4d5c467284aac2e14b4e2908c500aaf1a59a3ffebc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s5fkc" podUID="17e20abd-c58c-45cb-960e-cc4c34878a0d" Jul 9 09:55:39.737229 kubelet[1840]: E0709 09:55:39.737196 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:39.775985 kubelet[1840]: E0709 09:55:39.775940 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:40.776095 kubelet[1840]: E0709 09:55:40.776055 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:41.513377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689601922.mount: Deactivated successfully. 
Jul 9 09:55:41.733285 containerd[1507]: time="2025-07-09T09:55:41.733074875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:41.734079 containerd[1507]: time="2025-07-09T09:55:41.733923746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 9 09:55:41.734866 containerd[1507]: time="2025-07-09T09:55:41.734828297Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:41.736586 containerd[1507]: time="2025-07-09T09:55:41.736535079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:41.736982 containerd[1507]: time="2025-07-09T09:55:41.736957474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 17.754845354s" Jul 9 09:55:41.737027 containerd[1507]: time="2025-07-09T09:55:41.736989554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 9 09:55:41.746533 containerd[1507]: time="2025-07-09T09:55:41.746491333Z" level=info msg="CreateContainer within sandbox \"76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 09:55:41.753823 containerd[1507]: time="2025-07-09T09:55:41.753777016Z" level=info msg="Container 
bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:55:41.762653 containerd[1507]: time="2025-07-09T09:55:41.762603603Z" level=info msg="CreateContainer within sandbox \"76d33ee012deb334923d66022fd10e5e3c50f709f53d2bfdbd92ef728e3925f4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271\"" Jul 9 09:55:41.763200 containerd[1507]: time="2025-07-09T09:55:41.763160597Z" level=info msg="StartContainer for \"bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271\"" Jul 9 09:55:41.764760 containerd[1507]: time="2025-07-09T09:55:41.764590702Z" level=info msg="connecting to shim bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271" address="unix:///run/containerd/s/95cb75aeb0f0a264f08feafffe3886a633f4f583b971e8530e228dadef6fd79b" protocol=ttrpc version=3 Jul 9 09:55:41.776209 kubelet[1840]: E0709 09:55:41.776166 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:41.781765 systemd[1]: Started cri-containerd-bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271.scope - libcontainer container bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271. 
Jul 9 09:55:41.819494 containerd[1507]: time="2025-07-09T09:55:41.819439083Z" level=info msg="StartContainer for \"bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271\" returns successfully" Jul 9 09:55:41.907419 containerd[1507]: time="2025-07-09T09:55:41.907368915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-8xtsz,Uid:005be67c-c5b2-491a-8b2a-f1b0dfe3a532,Namespace:default,Attempt:0,}" Jul 9 09:55:41.958127 containerd[1507]: time="2025-07-09T09:55:41.958046420Z" level=error msg="Failed to destroy network for sandbox \"607d08d3ae153c2cdd5aa48eb275cf4dd45e1309ad078bd655adece8f2a7f542\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:41.959214 containerd[1507]: time="2025-07-09T09:55:41.959145688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-8xtsz,Uid:005be67c-c5b2-491a-8b2a-f1b0dfe3a532,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"607d08d3ae153c2cdd5aa48eb275cf4dd45e1309ad078bd655adece8f2a7f542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:41.959446 kubelet[1840]: E0709 09:55:41.959409 1840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607d08d3ae153c2cdd5aa48eb275cf4dd45e1309ad078bd655adece8f2a7f542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:55:41.959497 kubelet[1840]: E0709 09:55:41.959469 1840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"607d08d3ae153c2cdd5aa48eb275cf4dd45e1309ad078bd655adece8f2a7f542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-8xtsz" Jul 9 09:55:41.959521 kubelet[1840]: E0709 09:55:41.959493 1840 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607d08d3ae153c2cdd5aa48eb275cf4dd45e1309ad078bd655adece8f2a7f542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-8xtsz" Jul 9 09:55:41.959565 kubelet[1840]: E0709 09:55:41.959536 1840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-8xtsz_default(005be67c-c5b2-491a-8b2a-f1b0dfe3a532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-8xtsz_default(005be67c-c5b2-491a-8b2a-f1b0dfe3a532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"607d08d3ae153c2cdd5aa48eb275cf4dd45e1309ad078bd655adece8f2a7f542\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-8xtsz" podUID="005be67c-c5b2-491a-8b2a-f1b0dfe3a532" Jul 9 09:55:42.013598 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 09:55:42.014068 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 9 09:55:42.036383 kubelet[1840]: I0709 09:55:42.036250 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ppwqj" podStartSLOduration=3.915234444 podStartE2EDuration="1m3.036232884s" podCreationTimestamp="2025-07-09 09:54:39 +0000 UTC" firstStartedPulling="2025-07-09 09:54:42.616538748 +0000 UTC m=+3.622239544" lastFinishedPulling="2025-07-09 09:55:41.737537228 +0000 UTC m=+62.743237984" observedRunningTime="2025-07-09 09:55:42.035809568 +0000 UTC m=+63.041510364" watchObservedRunningTime="2025-07-09 09:55:42.036232884 +0000 UTC m=+63.041933680" Jul 9 09:55:42.094213 containerd[1507]: time="2025-07-09T09:55:42.094123048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271\" id:\"d3ee534a90aa78b15ea25da2116c93709c1b892cb5141ee8d4dbbf6754c4c681\" pid:2941 exit_status:1 exited_at:{seconds:1752054942 nanos:93697172}" Jul 9 09:55:42.143052 kubelet[1840]: I0709 09:55:42.142955 1840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-whisker-backend-key-pair\") pod \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\" (UID: \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\") " Jul 9 09:55:42.143052 kubelet[1840]: I0709 09:55:42.143005 1840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-whisker-ca-bundle\") pod \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\" (UID: \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\") " Jul 9 09:55:42.143052 kubelet[1840]: I0709 09:55:42.143030 1840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz49t\" (UniqueName: \"kubernetes.io/projected/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-kube-api-access-wz49t\") pod \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\" 
(UID: \"30c6cf0f-2eb1-4f16-a311-6127f4214bc0\") " Jul 9 09:55:42.143680 kubelet[1840]: I0709 09:55:42.143630 1840 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "30c6cf0f-2eb1-4f16-a311-6127f4214bc0" (UID: "30c6cf0f-2eb1-4f16-a311-6127f4214bc0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 09:55:42.146138 kubelet[1840]: I0709 09:55:42.146110 1840 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-kube-api-access-wz49t" (OuterVolumeSpecName: "kube-api-access-wz49t") pod "30c6cf0f-2eb1-4f16-a311-6127f4214bc0" (UID: "30c6cf0f-2eb1-4f16-a311-6127f4214bc0"). InnerVolumeSpecName "kube-api-access-wz49t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 09:55:42.146386 kubelet[1840]: I0709 09:55:42.146350 1840 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "30c6cf0f-2eb1-4f16-a311-6127f4214bc0" (UID: "30c6cf0f-2eb1-4f16-a311-6127f4214bc0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 09:55:42.244082 kubelet[1840]: I0709 09:55:42.244028 1840 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wz49t\" (UniqueName: \"kubernetes.io/projected/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-kube-api-access-wz49t\") on node \"10.0.0.66\" DevicePath \"\"" Jul 9 09:55:42.244082 kubelet[1840]: I0709 09:55:42.244067 1840 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-whisker-backend-key-pair\") on node \"10.0.0.66\" DevicePath \"\"" Jul 9 09:55:42.244082 kubelet[1840]: I0709 09:55:42.244078 1840 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30c6cf0f-2eb1-4f16-a311-6127f4214bc0-whisker-ca-bundle\") on node \"10.0.0.66\" DevicePath \"\"" Jul 9 09:55:42.514207 systemd[1]: var-lib-kubelet-pods-30c6cf0f\x2d2eb1\x2d4f16\x2da311\x2d6127f4214bc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwz49t.mount: Deactivated successfully. Jul 9 09:55:42.514301 systemd[1]: var-lib-kubelet-pods-30c6cf0f\x2d2eb1\x2d4f16\x2da311\x2d6127f4214bc0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 9 09:55:42.777547 kubelet[1840]: E0709 09:55:42.777300 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:43.022299 systemd[1]: Removed slice kubepods-besteffort-pod30c6cf0f_2eb1_4f16_a311_6127f4214bc0.slice - libcontainer container kubepods-besteffort-pod30c6cf0f_2eb1_4f16_a311_6127f4214bc0.slice. Jul 9 09:55:43.078309 systemd[1]: Created slice kubepods-besteffort-podf1667b38_4ba2_4901_be3f_b44f3c744cd9.slice - libcontainer container kubepods-besteffort-podf1667b38_4ba2_4901_be3f_b44f3c744cd9.slice. 
Jul 9 09:55:43.095833 containerd[1507]: time="2025-07-09T09:55:43.095770644Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271\" id:\"c071cb2c17f7bbae5e0f6fe857ba86c69b6ca702232557c99e0253fe1067250e\" pid:2989 exit_status:1 exited_at:{seconds:1752054943 nanos:94900853}" Jul 9 09:55:43.149654 kubelet[1840]: I0709 09:55:43.149597 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1667b38-4ba2-4901-be3f-b44f3c744cd9-whisker-backend-key-pair\") pod \"whisker-79847b74bf-2kn6c\" (UID: \"f1667b38-4ba2-4901-be3f-b44f3c744cd9\") " pod="calico-system/whisker-79847b74bf-2kn6c" Jul 9 09:55:43.149805 kubelet[1840]: I0709 09:55:43.149689 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1667b38-4ba2-4901-be3f-b44f3c744cd9-whisker-ca-bundle\") pod \"whisker-79847b74bf-2kn6c\" (UID: \"f1667b38-4ba2-4901-be3f-b44f3c744cd9\") " pod="calico-system/whisker-79847b74bf-2kn6c" Jul 9 09:55:43.149805 kubelet[1840]: I0709 09:55:43.149709 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9nbd\" (UniqueName: \"kubernetes.io/projected/f1667b38-4ba2-4901-be3f-b44f3c744cd9-kube-api-access-s9nbd\") pod \"whisker-79847b74bf-2kn6c\" (UID: \"f1667b38-4ba2-4901-be3f-b44f3c744cd9\") " pod="calico-system/whisker-79847b74bf-2kn6c" Jul 9 09:55:43.382510 containerd[1507]: time="2025-07-09T09:55:43.382335768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79847b74bf-2kn6c,Uid:f1667b38-4ba2-4901-be3f-b44f3c744cd9,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:43.548074 systemd-networkd[1424]: cali15aeca227c7: Link UP Jul 9 09:55:43.548274 systemd-networkd[1424]: cali15aeca227c7: Gained carrier Jul 9 
09:55:43.560398 containerd[1507]: 2025-07-09 09:55:43.410 [INFO][3121] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 09:55:43.560398 containerd[1507]: 2025-07-09 09:55:43.434 [INFO][3121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0 whisker-79847b74bf- calico-system f1667b38-4ba2-4901-be3f-b44f3c744cd9 1126 0 2025-07-09 09:55:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79847b74bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 10.0.0.66 whisker-79847b74bf-2kn6c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali15aeca227c7 [] [] }} ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Namespace="calico-system" Pod="whisker-79847b74bf-2kn6c" WorkloadEndpoint="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-" Jul 9 09:55:43.560398 containerd[1507]: 2025-07-09 09:55:43.435 [INFO][3121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Namespace="calico-system" Pod="whisker-79847b74bf-2kn6c" WorkloadEndpoint="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" Jul 9 09:55:43.560398 containerd[1507]: 2025-07-09 09:55:43.506 [INFO][3145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" HandleID="k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Workload="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.506 [INFO][3145] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" 
HandleID="k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Workload="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1860), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.66", "pod":"whisker-79847b74bf-2kn6c", "timestamp":"2025-07-09 09:55:43.5066268 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.506 [INFO][3145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.506 [INFO][3145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.507 [INFO][3145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.517 [INFO][3145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" host="10.0.0.66" Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.521 [INFO][3145] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.525 [INFO][3145] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.527 [INFO][3145] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.529 [INFO][3145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:43.560633 containerd[1507]: 2025-07-09 09:55:43.529 [INFO][3145] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" host="10.0.0.66" Jul 9 09:55:43.560844 containerd[1507]: 2025-07-09 09:55:43.530 [INFO][3145] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174 Jul 9 09:55:43.560844 containerd[1507]: 2025-07-09 09:55:43.533 [INFO][3145] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" host="10.0.0.66" Jul 9 09:55:43.560844 containerd[1507]: 2025-07-09 09:55:43.538 [INFO][3145] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.1/26] block=192.168.123.0/26 handle="k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" host="10.0.0.66" Jul 9 09:55:43.560844 containerd[1507]: 2025-07-09 09:55:43.538 [INFO][3145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.1/26] handle="k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" host="10.0.0.66" Jul 9 09:55:43.560844 containerd[1507]: 2025-07-09 09:55:43.538 [INFO][3145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 09:55:43.560844 containerd[1507]: 2025-07-09 09:55:43.538 [INFO][3145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.1/26] IPv6=[] ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" HandleID="k8s-pod-network.3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Workload="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" Jul 9 09:55:43.560962 containerd[1507]: 2025-07-09 09:55:43.540 [INFO][3121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Namespace="calico-system" Pod="whisker-79847b74bf-2kn6c" WorkloadEndpoint="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0", GenerateName:"whisker-79847b74bf-", Namespace:"calico-system", SelfLink:"", UID:"f1667b38-4ba2-4901-be3f-b44f3c744cd9", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79847b74bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"whisker-79847b74bf-2kn6c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali15aeca227c7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:43.560962 containerd[1507]: 2025-07-09 09:55:43.540 [INFO][3121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.1/32] ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Namespace="calico-system" Pod="whisker-79847b74bf-2kn6c" WorkloadEndpoint="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" Jul 9 09:55:43.561058 containerd[1507]: 2025-07-09 09:55:43.540 [INFO][3121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15aeca227c7 ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Namespace="calico-system" Pod="whisker-79847b74bf-2kn6c" WorkloadEndpoint="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" Jul 9 09:55:43.561058 containerd[1507]: 2025-07-09 09:55:43.550 [INFO][3121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Namespace="calico-system" Pod="whisker-79847b74bf-2kn6c" WorkloadEndpoint="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" Jul 9 09:55:43.561098 containerd[1507]: 2025-07-09 09:55:43.550 [INFO][3121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Namespace="calico-system" Pod="whisker-79847b74bf-2kn6c" WorkloadEndpoint="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0", GenerateName:"whisker-79847b74bf-", Namespace:"calico-system", SelfLink:"", UID:"f1667b38-4ba2-4901-be3f-b44f3c744cd9", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 55, 43, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79847b74bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174", Pod:"whisker-79847b74bf-2kn6c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali15aeca227c7", MAC:"ce:f3:ea:42:27:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:43.561144 containerd[1507]: 2025-07-09 09:55:43.558 [INFO][3121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" Namespace="calico-system" Pod="whisker-79847b74bf-2kn6c" WorkloadEndpoint="10.0.0.66-k8s-whisker--79847b74bf--2kn6c-eth0" Jul 9 09:55:43.589747 containerd[1507]: time="2025-07-09T09:55:43.589692926Z" level=info msg="connecting to shim 3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174" address="unix:///run/containerd/s/fbe974d89a1404b654e8cedba54fe5b2533666572bbf929e2475bfaad44c87db" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:43.624169 systemd[1]: Started cri-containerd-3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174.scope - libcontainer container 3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174. 
Jul 9 09:55:43.636253 systemd-networkd[1424]: vxlan.calico: Link UP Jul 9 09:55:43.636264 systemd-networkd[1424]: vxlan.calico: Gained carrier Jul 9 09:55:43.639414 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:43.667066 containerd[1507]: time="2025-07-09T09:55:43.667017070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79847b74bf-2kn6c,Uid:f1667b38-4ba2-4901-be3f-b44f3c744cd9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174\"" Jul 9 09:55:43.668752 containerd[1507]: time="2025-07-09T09:55:43.668711893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 9 09:55:43.777833 kubelet[1840]: E0709 09:55:43.777783 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:43.905780 kubelet[1840]: I0709 09:55:43.905399 1840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c6cf0f-2eb1-4f16-a311-6127f4214bc0" path="/var/lib/kubelet/pods/30c6cf0f-2eb1-4f16-a311-6127f4214bc0/volumes" Jul 9 09:55:44.095793 containerd[1507]: time="2025-07-09T09:55:44.095751629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271\" id:\"ad72391d1ab0c9d3e3fe8a0489eec47012ec2d6b0d3274b17ea855cf80cd9a2f\" pid:3294 exit_status:1 exited_at:{seconds:1752054944 nanos:95413073}" Jul 9 09:55:44.778221 kubelet[1840]: E0709 09:55:44.778154 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:45.003720 systemd-networkd[1424]: vxlan.calico: Gained IPv6LL Jul 9 09:55:45.323732 systemd-networkd[1424]: cali15aeca227c7: Gained IPv6LL Jul 9 09:55:45.779178 kubelet[1840]: E0709 09:55:45.779125 1840 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:45.905898 containerd[1507]: time="2025-07-09T09:55:45.903745669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-297zz,Uid:e27b3f7c-6fd6-4738-a77c-aa80afb44e50,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:55:46.040057 systemd-networkd[1424]: cali2806c41f8c7: Link UP Jul 9 09:55:46.040469 systemd-networkd[1424]: cali2806c41f8c7: Gained carrier Jul 9 09:55:46.057298 containerd[1507]: 2025-07-09 09:55:45.957 [INFO][3310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0 calico-apiserver-6ff7476b68- calico-apiserver e27b3f7c-6fd6-4738-a77c-aa80afb44e50 995 0 2025-07-09 09:54:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ff7476b68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.66 calico-apiserver-6ff7476b68-297zz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2806c41f8c7 [] [] }} ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-297zz" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-" Jul 9 09:55:46.057298 containerd[1507]: 2025-07-09 09:55:45.957 [INFO][3310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-297zz" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" Jul 9 09:55:46.057298 containerd[1507]: 2025-07-09 09:55:45.987 [INFO][3325] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" HandleID="k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Workload="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:45.987 [INFO][3325] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" HandleID="k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Workload="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.66", "pod":"calico-apiserver-6ff7476b68-297zz", "timestamp":"2025-07-09 09:55:45.987383588 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:45.987 [INFO][3325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:45.987 [INFO][3325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:45.987 [INFO][3325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:45.999 [INFO][3325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" host="10.0.0.66" Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:46.005 [INFO][3325] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:46.011 [INFO][3325] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:46.014 [INFO][3325] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:46.020 [INFO][3325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:46.057508 containerd[1507]: 2025-07-09 09:55:46.020 [INFO][3325] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" host="10.0.0.66" Jul 9 09:55:46.057750 containerd[1507]: 2025-07-09 09:55:46.022 [INFO][3325] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52 Jul 9 09:55:46.057750 containerd[1507]: 2025-07-09 09:55:46.028 [INFO][3325] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" host="10.0.0.66" Jul 9 09:55:46.057750 containerd[1507]: 2025-07-09 09:55:46.035 [INFO][3325] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.2/26] block=192.168.123.0/26 
handle="k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" host="10.0.0.66" Jul 9 09:55:46.057750 containerd[1507]: 2025-07-09 09:55:46.035 [INFO][3325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.2/26] handle="k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" host="10.0.0.66" Jul 9 09:55:46.057750 containerd[1507]: 2025-07-09 09:55:46.035 [INFO][3325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:55:46.057750 containerd[1507]: 2025-07-09 09:55:46.035 [INFO][3325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.2/26] IPv6=[] ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" HandleID="k8s-pod-network.49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Workload="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" Jul 9 09:55:46.058022 containerd[1507]: 2025-07-09 09:55:46.037 [INFO][3310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-297zz" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0", GenerateName:"calico-apiserver-6ff7476b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"e27b3f7c-6fd6-4738-a77c-aa80afb44e50", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff7476b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"calico-apiserver-6ff7476b68-297zz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2806c41f8c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:46.058085 containerd[1507]: 2025-07-09 09:55:46.038 [INFO][3310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.2/32] ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-297zz" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" Jul 9 09:55:46.058085 containerd[1507]: 2025-07-09 09:55:46.038 [INFO][3310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2806c41f8c7 ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-297zz" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" Jul 9 09:55:46.058085 containerd[1507]: 2025-07-09 09:55:46.040 [INFO][3310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-297zz" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" Jul 9 09:55:46.058147 containerd[1507]: 2025-07-09 09:55:46.040 [INFO][3310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-297zz" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0", GenerateName:"calico-apiserver-6ff7476b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"e27b3f7c-6fd6-4738-a77c-aa80afb44e50", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff7476b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52", Pod:"calico-apiserver-6ff7476b68-297zz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2806c41f8c7", MAC:"0a:5e:4b:17:9b:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:46.058191 containerd[1507]: 2025-07-09 09:55:46.055 [INFO][3310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-297zz" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--297zz-eth0" Jul 9 09:55:46.083325 containerd[1507]: time="2025-07-09T09:55:46.083119128Z" level=info msg="connecting to shim 49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52" address="unix:///run/containerd/s/0bea72f80b0543b4587293dfabb895074251b283de3cf8f6194506d4e816a012" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:46.107756 systemd[1]: Started cri-containerd-49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52.scope - libcontainer container 49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52. Jul 9 09:55:46.120280 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:46.143362 containerd[1507]: time="2025-07-09T09:55:46.143325044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-297zz,Uid:e27b3f7c-6fd6-4738-a77c-aa80afb44e50,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52\"" Jul 9 09:55:46.780264 kubelet[1840]: E0709 09:55:46.780189 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:46.903451 containerd[1507]: time="2025-07-09T09:55:46.903384801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-sl8nm,Uid:6cdbffe2-a7ef-4b73-989e-6be39e6466bb,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:47.022437 systemd-networkd[1424]: calia0791a11866: Link UP Jul 9 09:55:47.023063 systemd-networkd[1424]: calia0791a11866: Gained carrier Jul 9 09:55:47.037548 containerd[1507]: 2025-07-09 09:55:46.954 [INFO][3388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0 goldmane-768f4c5c69- calico-system 6cdbffe2-a7ef-4b73-989e-6be39e6466bb 996 0 2025-07-09 09:54:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 10.0.0.66 goldmane-768f4c5c69-sl8nm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia0791a11866 [] [] }} ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Namespace="calico-system" Pod="goldmane-768f4c5c69-sl8nm" WorkloadEndpoint="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-" Jul 9 09:55:47.037548 containerd[1507]: 2025-07-09 09:55:46.954 [INFO][3388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Namespace="calico-system" Pod="goldmane-768f4c5c69-sl8nm" WorkloadEndpoint="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" Jul 9 09:55:47.037548 containerd[1507]: 2025-07-09 09:55:46.975 [INFO][3402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" HandleID="k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Workload="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:46.976 [INFO][3402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" HandleID="k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Workload="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000343730), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.66", "pod":"goldmane-768f4c5c69-sl8nm", "timestamp":"2025-07-09 09:55:46.975552925 
+0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:46.976 [INFO][3402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:46.976 [INFO][3402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:46.976 [INFO][3402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:46.985 [INFO][3402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" host="10.0.0.66" Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:46.990 [INFO][3402] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:46.994 [INFO][3402] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:47.002 [INFO][3402] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:47.004 [INFO][3402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:47.038012 containerd[1507]: 2025-07-09 09:55:47.004 [INFO][3402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" host="10.0.0.66" Jul 9 09:55:47.038208 containerd[1507]: 2025-07-09 09:55:47.005 [INFO][3402] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2 Jul 9 09:55:47.038208 containerd[1507]: 2025-07-09 09:55:47.012 [INFO][3402] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" host="10.0.0.66" Jul 9 09:55:47.038208 containerd[1507]: 2025-07-09 09:55:47.017 [INFO][3402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.3/26] block=192.168.123.0/26 handle="k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" host="10.0.0.66" Jul 9 09:55:47.038208 containerd[1507]: 2025-07-09 09:55:47.018 [INFO][3402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.3/26] handle="k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" host="10.0.0.66" Jul 9 09:55:47.038208 containerd[1507]: 2025-07-09 09:55:47.018 [INFO][3402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:55:47.038208 containerd[1507]: 2025-07-09 09:55:47.018 [INFO][3402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.3/26] IPv6=[] ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" HandleID="k8s-pod-network.17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Workload="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" Jul 9 09:55:47.038311 containerd[1507]: 2025-07-09 09:55:47.019 [INFO][3388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Namespace="calico-system" Pod="goldmane-768f4c5c69-sl8nm" WorkloadEndpoint="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", 
UID:"6cdbffe2-a7ef-4b73-989e-6be39e6466bb", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"goldmane-768f4c5c69-sl8nm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia0791a11866", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:47.038311 containerd[1507]: 2025-07-09 09:55:47.019 [INFO][3388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.3/32] ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Namespace="calico-system" Pod="goldmane-768f4c5c69-sl8nm" WorkloadEndpoint="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" Jul 9 09:55:47.038372 containerd[1507]: 2025-07-09 09:55:47.019 [INFO][3388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0791a11866 ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Namespace="calico-system" Pod="goldmane-768f4c5c69-sl8nm" WorkloadEndpoint="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" Jul 9 09:55:47.038372 containerd[1507]: 2025-07-09 09:55:47.023 [INFO][3388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Namespace="calico-system" Pod="goldmane-768f4c5c69-sl8nm" WorkloadEndpoint="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" Jul 9 09:55:47.038414 containerd[1507]: 2025-07-09 09:55:47.023 [INFO][3388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Namespace="calico-system" Pod="goldmane-768f4c5c69-sl8nm" WorkloadEndpoint="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"6cdbffe2-a7ef-4b73-989e-6be39e6466bb", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2", Pod:"goldmane-768f4c5c69-sl8nm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia0791a11866", MAC:"3e:24:bb:67:72:bf", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:47.038551 containerd[1507]: 2025-07-09 09:55:47.035 [INFO][3388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" Namespace="calico-system" Pod="goldmane-768f4c5c69-sl8nm" WorkloadEndpoint="10.0.0.66-k8s-goldmane--768f4c5c69--sl8nm-eth0" Jul 9 09:55:47.057083 containerd[1507]: time="2025-07-09T09:55:47.057039412Z" level=info msg="connecting to shim 17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2" address="unix:///run/containerd/s/aaa4ac42cc4675cc9831363924eba5a7c20e4884dbe4a1523819187d3ac0e076" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:47.082733 systemd[1]: Started cri-containerd-17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2.scope - libcontainer container 17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2. Jul 9 09:55:47.092638 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:47.111438 containerd[1507]: time="2025-07-09T09:55:47.111333354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-sl8nm,Uid:6cdbffe2-a7ef-4b73-989e-6be39e6466bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2\"" Jul 9 09:55:47.179776 systemd-networkd[1424]: cali2806c41f8c7: Gained IPv6LL Jul 9 09:55:47.780816 kubelet[1840]: E0709 09:55:47.780753 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:48.395884 systemd-networkd[1424]: calia0791a11866: Gained IPv6LL Jul 9 09:55:48.781115 kubelet[1840]: E0709 09:55:48.781070 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:48.903840 containerd[1507]: 
time="2025-07-09T09:55:48.903661591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb67b5544-mqk5b,Uid:fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:48.913912 containerd[1507]: time="2025-07-09T09:55:48.913638342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2tf5,Uid:5b03f46f-c50c-4969-a08e-e3da216bd85b,Namespace:kube-system,Attempt:0,}" Jul 9 09:55:49.013443 systemd-networkd[1424]: cali1f826a92972: Link UP Jul 9 09:55:49.014169 systemd-networkd[1424]: cali1f826a92972: Gained carrier Jul 9 09:55:49.029391 containerd[1507]: 2025-07-09 09:55:48.942 [INFO][3470] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0 calico-kube-controllers-6bb67b5544- calico-system fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614 994 0 2025-07-09 09:54:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bb67b5544 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.0.0.66 calico-kube-controllers-6bb67b5544-mqk5b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f826a92972 [] [] }} ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Namespace="calico-system" Pod="calico-kube-controllers-6bb67b5544-mqk5b" WorkloadEndpoint="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-" Jul 9 09:55:49.029391 containerd[1507]: 2025-07-09 09:55:48.942 [INFO][3470] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Namespace="calico-system" Pod="calico-kube-controllers-6bb67b5544-mqk5b" 
WorkloadEndpoint="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" Jul 9 09:55:49.029391 containerd[1507]: 2025-07-09 09:55:48.970 [INFO][3500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" HandleID="k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Workload="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.970 [INFO][3500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" HandleID="k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Workload="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051aa40), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.66", "pod":"calico-kube-controllers-6bb67b5544-mqk5b", "timestamp":"2025-07-09 09:55:48.970065474 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.970 [INFO][3500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.970 [INFO][3500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.970 [INFO][3500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.979 [INFO][3500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" host="10.0.0.66" Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.984 [INFO][3500] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.989 [INFO][3500] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.991 [INFO][3500] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.993 [INFO][3500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:49.029717 containerd[1507]: 2025-07-09 09:55:48.993 [INFO][3500] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" host="10.0.0.66" Jul 9 09:55:49.029931 containerd[1507]: 2025-07-09 09:55:48.995 [INFO][3500] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65 Jul 9 09:55:49.029931 containerd[1507]: 2025-07-09 09:55:48.998 [INFO][3500] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" host="10.0.0.66" Jul 9 09:55:49.029931 containerd[1507]: 2025-07-09 09:55:49.008 [INFO][3500] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.4/26] block=192.168.123.0/26 
handle="k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" host="10.0.0.66" Jul 9 09:55:49.029931 containerd[1507]: 2025-07-09 09:55:49.008 [INFO][3500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.4/26] handle="k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" host="10.0.0.66" Jul 9 09:55:49.029931 containerd[1507]: 2025-07-09 09:55:49.008 [INFO][3500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:55:49.029931 containerd[1507]: 2025-07-09 09:55:49.008 [INFO][3500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.4/26] IPv6=[] ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" HandleID="k8s-pod-network.cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Workload="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" Jul 9 09:55:49.030037 containerd[1507]: 2025-07-09 09:55:49.009 [INFO][3470] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Namespace="calico-system" Pod="calico-kube-controllers-6bb67b5544-mqk5b" WorkloadEndpoint="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0", GenerateName:"calico-kube-controllers-6bb67b5544-", Namespace:"calico-system", SelfLink:"", UID:"fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb67b5544", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"calico-kube-controllers-6bb67b5544-mqk5b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f826a92972", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:49.030084 containerd[1507]: 2025-07-09 09:55:49.010 [INFO][3470] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.4/32] ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Namespace="calico-system" Pod="calico-kube-controllers-6bb67b5544-mqk5b" WorkloadEndpoint="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" Jul 9 09:55:49.030084 containerd[1507]: 2025-07-09 09:55:49.010 [INFO][3470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f826a92972 ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Namespace="calico-system" Pod="calico-kube-controllers-6bb67b5544-mqk5b" WorkloadEndpoint="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" Jul 9 09:55:49.030084 containerd[1507]: 2025-07-09 09:55:49.014 [INFO][3470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Namespace="calico-system" Pod="calico-kube-controllers-6bb67b5544-mqk5b" WorkloadEndpoint="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" Jul 9 09:55:49.030142 containerd[1507]: 2025-07-09 
09:55:49.015 [INFO][3470] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Namespace="calico-system" Pod="calico-kube-controllers-6bb67b5544-mqk5b" WorkloadEndpoint="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0", GenerateName:"calico-kube-controllers-6bb67b5544-", Namespace:"calico-system", SelfLink:"", UID:"fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb67b5544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65", Pod:"calico-kube-controllers-6bb67b5544-mqk5b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f826a92972", MAC:"82:3b:2a:68:52:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:49.030186 containerd[1507]: 2025-07-09 
09:55:49.027 [INFO][3470] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" Namespace="calico-system" Pod="calico-kube-controllers-6bb67b5544-mqk5b" WorkloadEndpoint="10.0.0.66-k8s-calico--kube--controllers--6bb67b5544--mqk5b-eth0" Jul 9 09:55:49.060380 containerd[1507]: time="2025-07-09T09:55:49.060096035Z" level=info msg="connecting to shim cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65" address="unix:///run/containerd/s/14adce599da7fb4cf994598e49781deaa92ca77fe523885bac3144932ec76007" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:49.099854 systemd[1]: Started cri-containerd-cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65.scope - libcontainer container cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65. Jul 9 09:55:49.118333 systemd-networkd[1424]: calid138aac548f: Link UP Jul 9 09:55:49.118676 systemd-networkd[1424]: calid138aac548f: Gained carrier Jul 9 09:55:49.122617 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:49.129564 containerd[1507]: 2025-07-09 09:55:48.955 [INFO][3485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0 coredns-668d6bf9bc- kube-system 5b03f46f-c50c-4969-a08e-e3da216bd85b 987 0 2025-07-09 09:54:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.66 coredns-668d6bf9bc-g2tf5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid138aac548f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2tf5" 
WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-" Jul 9 09:55:49.129564 containerd[1507]: 2025-07-09 09:55:48.955 [INFO][3485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2tf5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" Jul 9 09:55:49.129564 containerd[1507]: 2025-07-09 09:55:48.982 [INFO][3508] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" HandleID="k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Workload="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:48.982 [INFO][3508] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" HandleID="k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Workload="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3810), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.66", "pod":"coredns-668d6bf9bc-g2tf5", "timestamp":"2025-07-09 09:55:48.98277768 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:48.982 [INFO][3508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:49.008 [INFO][3508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:49.008 [INFO][3508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:49.082 [INFO][3508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" host="10.0.0.66" Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:49.088 [INFO][3508] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:49.093 [INFO][3508] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:49.096 [INFO][3508] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:49.099 [INFO][3508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:49.129753 containerd[1507]: 2025-07-09 09:55:49.099 [INFO][3508] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" host="10.0.0.66" Jul 9 09:55:49.129961 containerd[1507]: 2025-07-09 09:55:49.102 [INFO][3508] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9 Jul 9 09:55:49.129961 containerd[1507]: 2025-07-09 09:55:49.106 [INFO][3508] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" host="10.0.0.66" Jul 9 09:55:49.129961 containerd[1507]: 2025-07-09 09:55:49.112 [INFO][3508] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.5/26] block=192.168.123.0/26 
handle="k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" host="10.0.0.66" Jul 9 09:55:49.129961 containerd[1507]: 2025-07-09 09:55:49.112 [INFO][3508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.5/26] handle="k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" host="10.0.0.66" Jul 9 09:55:49.129961 containerd[1507]: 2025-07-09 09:55:49.112 [INFO][3508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:55:49.129961 containerd[1507]: 2025-07-09 09:55:49.112 [INFO][3508] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.5/26] IPv6=[] ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" HandleID="k8s-pod-network.6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Workload="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" Jul 9 09:55:49.130071 containerd[1507]: 2025-07-09 09:55:49.115 [INFO][3485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2tf5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b03f46f-c50c-4969-a08e-e3da216bd85b", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"coredns-668d6bf9bc-g2tf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid138aac548f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:49.130119 containerd[1507]: 2025-07-09 09:55:49.115 [INFO][3485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.5/32] ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2tf5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" Jul 9 09:55:49.130119 containerd[1507]: 2025-07-09 09:55:49.115 [INFO][3485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid138aac548f ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2tf5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" Jul 9 09:55:49.130119 containerd[1507]: 2025-07-09 09:55:49.117 [INFO][3485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2tf5" 
WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" Jul 9 09:55:49.130181 containerd[1507]: 2025-07-09 09:55:49.117 [INFO][3485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2tf5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b03f46f-c50c-4969-a08e-e3da216bd85b", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9", Pod:"coredns-668d6bf9bc-g2tf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid138aac548f", MAC:"36:2c:37:49:75:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:49.130181 containerd[1507]: 2025-07-09 09:55:49.127 [INFO][3485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2tf5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--g2tf5-eth0" Jul 9 09:55:49.157080 containerd[1507]: time="2025-07-09T09:55:49.157032139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb67b5544-mqk5b,Uid:fcaa3a38-1ce8-43db-b5c0-9aa9a3dc3614,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65\"" Jul 9 09:55:49.158532 containerd[1507]: time="2025-07-09T09:55:49.157982691Z" level=info msg="connecting to shim 6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9" address="unix:///run/containerd/s/34a2fd0663e592c6cc866b1d36a97d7baecfabd2e51fa2231f3afc57514f70d7" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:49.181725 systemd[1]: Started cri-containerd-6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9.scope - libcontainer container 6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9. 
Jul 9 09:55:49.192646 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:49.214635 containerd[1507]: time="2025-07-09T09:55:49.214569392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2tf5,Uid:5b03f46f-c50c-4969-a08e-e3da216bd85b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9\"" Jul 9 09:55:49.781973 kubelet[1840]: E0709 09:55:49.781924 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:49.903709 containerd[1507]: time="2025-07-09T09:55:49.903662032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqnt5,Uid:5d42a16a-2886-4936-a84a-3a3065394fcf,Namespace:kube-system,Attempt:0,}" Jul 9 09:55:50.005058 systemd-networkd[1424]: cali13cb79d340a: Link UP Jul 9 09:55:50.005422 systemd-networkd[1424]: cali13cb79d340a: Gained carrier Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.941 [INFO][3628] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0 coredns-668d6bf9bc- kube-system 5d42a16a-2886-4936-a84a-3a3065394fcf 991 0 2025-07-09 09:54:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.66 coredns-668d6bf9bc-hqnt5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali13cb79d340a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqnt5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.941 
[INFO][3628] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqnt5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.965 [INFO][3643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" HandleID="k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Workload="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.965 [INFO][3643] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" HandleID="k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Workload="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c2f0), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.66", "pod":"coredns-668d6bf9bc-hqnt5", "timestamp":"2025-07-09 09:55:49.965397808 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.965 [INFO][3643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.965 [INFO][3643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.965 [INFO][3643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.974 [INFO][3643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.981 [INFO][3643] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.986 [INFO][3643] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.988 [INFO][3643] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.990 [INFO][3643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.990 [INFO][3643] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.992 [INFO][3643] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8 Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:49.995 [INFO][3643] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:50.001 [INFO][3643] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.6/26] block=192.168.123.0/26 
handle="k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:50.001 [INFO][3643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.6/26] handle="k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" host="10.0.0.66" Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:50.001 [INFO][3643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:55:50.016611 containerd[1507]: 2025-07-09 09:55:50.001 [INFO][3643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.6/26] IPv6=[] ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" HandleID="k8s-pod-network.c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Workload="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" Jul 9 09:55:50.017316 containerd[1507]: 2025-07-09 09:55:50.003 [INFO][3628] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqnt5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d42a16a-2886-4936-a84a-3a3065394fcf", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"coredns-668d6bf9bc-hqnt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13cb79d340a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:50.017316 containerd[1507]: 2025-07-09 09:55:50.003 [INFO][3628] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.6/32] ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqnt5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" Jul 9 09:55:50.017316 containerd[1507]: 2025-07-09 09:55:50.003 [INFO][3628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13cb79d340a ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqnt5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" Jul 9 09:55:50.017316 containerd[1507]: 2025-07-09 09:55:50.004 [INFO][3628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqnt5" 
WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" Jul 9 09:55:50.017316 containerd[1507]: 2025-07-09 09:55:50.005 [INFO][3628] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqnt5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d42a16a-2886-4936-a84a-3a3065394fcf", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8", Pod:"coredns-668d6bf9bc-hqnt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13cb79d340a", MAC:"0e:74:a3:68:3e:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:50.017316 containerd[1507]: 2025-07-09 09:55:50.014 [INFO][3628] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqnt5" WorkloadEndpoint="10.0.0.66-k8s-coredns--668d6bf9bc--hqnt5-eth0" Jul 9 09:55:50.034699 containerd[1507]: time="2025-07-09T09:55:50.034423924Z" level=info msg="connecting to shim c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8" address="unix:///run/containerd/s/b0daec138e64405423522172c1e0a8620a0362c62dbfc5aea5c952f54c155998" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:50.058747 systemd[1]: Started cri-containerd-c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8.scope - libcontainer container c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8. 
Jul 9 09:55:50.068606 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:50.087911 containerd[1507]: time="2025-07-09T09:55:50.087873941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqnt5,Uid:5d42a16a-2886-4936-a84a-3a3065394fcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8\"" Jul 9 09:55:50.507818 systemd-networkd[1424]: cali1f826a92972: Gained IPv6LL Jul 9 09:55:50.782615 kubelet[1840]: E0709 09:55:50.782483 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:50.827773 systemd-networkd[1424]: calid138aac548f: Gained IPv6LL Jul 9 09:55:51.723712 systemd-networkd[1424]: cali13cb79d340a: Gained IPv6LL Jul 9 09:55:51.783679 kubelet[1840]: E0709 09:55:51.783623 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:51.904375 containerd[1507]: time="2025-07-09T09:55:51.904073985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-8zv5b,Uid:cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:55:52.037900 systemd-networkd[1424]: cali6a64c5fc819: Link UP Jul 9 09:55:52.040280 systemd-networkd[1424]: cali6a64c5fc819: Gained carrier Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:51.951 [INFO][3706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0 calico-apiserver-6ff7476b68- calico-apiserver cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df 998 0 2025-07-09 09:54:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ff7476b68 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.66 calico-apiserver-6ff7476b68-8zv5b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a64c5fc819 [] [] }} ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-8zv5b" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:51.951 [INFO][3706] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-8zv5b" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:51.983 [INFO][3721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" HandleID="k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Workload="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:51.983 [INFO][3721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" HandleID="k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Workload="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.66", "pod":"calico-apiserver-6ff7476b68-8zv5b", "timestamp":"2025-07-09 09:55:51.983087473 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:51.983 [INFO][3721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:51.983 [INFO][3721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:51.983 [INFO][3721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:51.996 [INFO][3721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.003 [INFO][3721] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.009 [INFO][3721] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.012 [INFO][3721] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.016 [INFO][3721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.016 [INFO][3721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.019 [INFO][3721] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81 Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.023 [INFO][3721] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.123.0/26 handle="k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.031 [INFO][3721] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.7/26] block=192.168.123.0/26 handle="k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.031 [INFO][3721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.7/26] handle="k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" host="10.0.0.66" Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.031 [INFO][3721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:55:52.053705 containerd[1507]: 2025-07-09 09:55:52.031 [INFO][3721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.7/26] IPv6=[] ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" HandleID="k8s-pod-network.b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Workload="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" Jul 9 09:55:52.054482 containerd[1507]: 2025-07-09 09:55:52.033 [INFO][3706] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-8zv5b" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0", GenerateName:"calico-apiserver-6ff7476b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 16, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff7476b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"calico-apiserver-6ff7476b68-8zv5b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a64c5fc819", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:52.054482 containerd[1507]: 2025-07-09 09:55:52.033 [INFO][3706] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.7/32] ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-8zv5b" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" Jul 9 09:55:52.054482 containerd[1507]: 2025-07-09 09:55:52.033 [INFO][3706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a64c5fc819 ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-8zv5b" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" Jul 9 09:55:52.054482 containerd[1507]: 2025-07-09 09:55:52.034 [INFO][3706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" 
Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-8zv5b" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" Jul 9 09:55:52.054482 containerd[1507]: 2025-07-09 09:55:52.038 [INFO][3706] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-8zv5b" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0", GenerateName:"calico-apiserver-6ff7476b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ff7476b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81", Pod:"calico-apiserver-6ff7476b68-8zv5b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a64c5fc819", MAC:"46:43:50:bb:10:63", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:52.054482 containerd[1507]: 2025-07-09 09:55:52.050 [INFO][3706] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" Namespace="calico-apiserver" Pod="calico-apiserver-6ff7476b68-8zv5b" WorkloadEndpoint="10.0.0.66-k8s-calico--apiserver--6ff7476b68--8zv5b-eth0" Jul 9 09:55:52.074215 containerd[1507]: time="2025-07-09T09:55:52.074155468Z" level=info msg="connecting to shim b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81" address="unix:///run/containerd/s/7a5611c6849de8efa2bd3ac02b171b2bcc8a3e9c44f227a702b7a35abe540ee3" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:52.096751 systemd[1]: Started cri-containerd-b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81.scope - libcontainer container b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81. 
Jul 9 09:55:52.107023 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:52.132257 containerd[1507]: time="2025-07-09T09:55:52.132219862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ff7476b68-8zv5b,Uid:cca6a8b1-15c4-430d-b8f9-ad2e9c29c3df,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81\"" Jul 9 09:55:52.784128 kubelet[1840]: E0709 09:55:52.784082 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:53.516173 systemd-networkd[1424]: cali6a64c5fc819: Gained IPv6LL Jul 9 09:55:53.784788 kubelet[1840]: E0709 09:55:53.784646 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:53.904371 containerd[1507]: time="2025-07-09T09:55:53.904305909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5fkc,Uid:17e20abd-c58c-45cb-960e-cc4c34878a0d,Namespace:calico-system,Attempt:0,}" Jul 9 09:55:54.009797 systemd-networkd[1424]: cali8ed341c60e7: Link UP Jul 9 09:55:54.010444 systemd-networkd[1424]: cali8ed341c60e7: Gained carrier Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.944 [INFO][3791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-csi--node--driver--s5fkc-eth0 csi-node-driver- calico-system 17e20abd-c58c-45cb-960e-cc4c34878a0d 799 0 2025-07-09 09:54:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.66 csi-node-driver-s5fkc eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] cali8ed341c60e7 [] [] }} ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Namespace="calico-system" Pod="csi-node-driver-s5fkc" WorkloadEndpoint="10.0.0.66-k8s-csi--node--driver--s5fkc-" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.944 [INFO][3791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Namespace="calico-system" Pod="csi-node-driver-s5fkc" WorkloadEndpoint="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.968 [INFO][3807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" HandleID="k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Workload="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.968 [INFO][3807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" HandleID="k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Workload="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.66", "pod":"csi-node-driver-s5fkc", "timestamp":"2025-07-09 09:55:53.968232222 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.968 [INFO][3807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.968 [INFO][3807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.968 [INFO][3807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.978 [INFO][3807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.983 [INFO][3807] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.987 [INFO][3807] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.989 [INFO][3807] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.992 [INFO][3807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.992 [INFO][3807] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.994 [INFO][3807] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583 Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:53.999 [INFO][3807] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:54.006 [INFO][3807] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.123.8/26] block=192.168.123.0/26 handle="k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:54.006 [INFO][3807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.8/26] handle="k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" host="10.0.0.66" Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:54.006 [INFO][3807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:55:54.024246 containerd[1507]: 2025-07-09 09:55:54.006 [INFO][3807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.8/26] IPv6=[] ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" HandleID="k8s-pod-network.54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Workload="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" Jul 9 09:55:54.025419 containerd[1507]: 2025-07-09 09:55:54.007 [INFO][3791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Namespace="calico-system" Pod="csi-node-driver-s5fkc" WorkloadEndpoint="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-csi--node--driver--s5fkc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17e20abd-c58c-45cb-960e-cc4c34878a0d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"csi-node-driver-s5fkc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ed341c60e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:54.025419 containerd[1507]: 2025-07-09 09:55:54.008 [INFO][3791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.8/32] ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Namespace="calico-system" Pod="csi-node-driver-s5fkc" WorkloadEndpoint="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" Jul 9 09:55:54.025419 containerd[1507]: 2025-07-09 09:55:54.008 [INFO][3791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ed341c60e7 ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Namespace="calico-system" Pod="csi-node-driver-s5fkc" WorkloadEndpoint="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" Jul 9 09:55:54.025419 containerd[1507]: 2025-07-09 09:55:54.010 [INFO][3791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Namespace="calico-system" Pod="csi-node-driver-s5fkc" WorkloadEndpoint="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" Jul 9 09:55:54.025419 containerd[1507]: 2025-07-09 09:55:54.011 [INFO][3791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Namespace="calico-system" Pod="csi-node-driver-s5fkc" WorkloadEndpoint="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-csi--node--driver--s5fkc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17e20abd-c58c-45cb-960e-cc4c34878a0d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583", Pod:"csi-node-driver-s5fkc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ed341c60e7", MAC:"0e:d9:d4:56:d8:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:54.025419 containerd[1507]: 2025-07-09 09:55:54.020 [INFO][3791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" Namespace="calico-system" 
Pod="csi-node-driver-s5fkc" WorkloadEndpoint="10.0.0.66-k8s-csi--node--driver--s5fkc-eth0" Jul 9 09:55:54.051917 containerd[1507]: time="2025-07-09T09:55:54.051559542Z" level=info msg="connecting to shim 54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583" address="unix:///run/containerd/s/c7fcbec433d36cdc7dc28c4631f80ae1e783e0cf46884e400ffff03323a42dfd" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:54.085795 systemd[1]: Started cri-containerd-54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583.scope - libcontainer container 54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583. Jul 9 09:55:54.097202 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:54.110160 containerd[1507]: time="2025-07-09T09:55:54.110064587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5fkc,Uid:17e20abd-c58c-45cb-960e-cc4c34878a0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583\"" Jul 9 09:55:54.785288 kubelet[1840]: E0709 09:55:54.785012 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:54.903603 containerd[1507]: time="2025-07-09T09:55:54.903551629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-8xtsz,Uid:005be67c-c5b2-491a-8b2a-f1b0dfe3a532,Namespace:default,Attempt:0,}" Jul 9 09:55:55.070965 systemd-networkd[1424]: cali73369be13fb: Link UP Jul 9 09:55:55.072895 systemd-networkd[1424]: cali73369be13fb: Gained carrier Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:54.981 [INFO][3870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0 nginx-deployment-7fcdb87857- default 005be67c-c5b2-491a-8b2a-f1b0dfe3a532 1035 0 
2025-07-09 09:55:27 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.66 nginx-deployment-7fcdb87857-8xtsz eth0 default [] [] [kns.default ksa.default.default] cali73369be13fb [] [] }} ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Namespace="default" Pod="nginx-deployment-7fcdb87857-8xtsz" WorkloadEndpoint="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:54.981 [INFO][3870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Namespace="default" Pod="nginx-deployment-7fcdb87857-8xtsz" WorkloadEndpoint="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.010 [INFO][3886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" HandleID="k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Workload="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.010 [INFO][3886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" HandleID="k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Workload="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.66", "pod":"nginx-deployment-7fcdb87857-8xtsz", "timestamp":"2025-07-09 09:55:55.010208445 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.010 [INFO][3886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.010 [INFO][3886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.010 [INFO][3886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.021 [INFO][3886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.028 [INFO][3886] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.034 [INFO][3886] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.037 [INFO][3886] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.040 [INFO][3886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.040 [INFO][3886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.042 [INFO][3886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91 Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.048 [INFO][3886] ipam/ipam.go 1243: 
Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.063 [INFO][3886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.9/26] block=192.168.123.0/26 handle="k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.063 [INFO][3886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.9/26] handle="k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" host="10.0.0.66" Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.063 [INFO][3886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:55:55.091624 containerd[1507]: 2025-07-09 09:55:55.063 [INFO][3886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.9/26] IPv6=[] ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" HandleID="k8s-pod-network.a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Workload="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" Jul 9 09:55:55.092520 containerd[1507]: 2025-07-09 09:55:55.066 [INFO][3870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Namespace="default" Pod="nginx-deployment-7fcdb87857-8xtsz" WorkloadEndpoint="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"005be67c-c5b2-491a-8b2a-f1b0dfe3a532", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 55, 27, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-8xtsz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.123.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali73369be13fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:55.092520 containerd[1507]: 2025-07-09 09:55:55.066 [INFO][3870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.9/32] ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Namespace="default" Pod="nginx-deployment-7fcdb87857-8xtsz" WorkloadEndpoint="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" Jul 9 09:55:55.092520 containerd[1507]: 2025-07-09 09:55:55.066 [INFO][3870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73369be13fb ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Namespace="default" Pod="nginx-deployment-7fcdb87857-8xtsz" WorkloadEndpoint="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" Jul 9 09:55:55.092520 containerd[1507]: 2025-07-09 09:55:55.073 [INFO][3870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Namespace="default" Pod="nginx-deployment-7fcdb87857-8xtsz" WorkloadEndpoint="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" Jul 9 
09:55:55.092520 containerd[1507]: 2025-07-09 09:55:55.074 [INFO][3870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Namespace="default" Pod="nginx-deployment-7fcdb87857-8xtsz" WorkloadEndpoint="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"005be67c-c5b2-491a-8b2a-f1b0dfe3a532", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91", Pod:"nginx-deployment-7fcdb87857-8xtsz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.123.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali73369be13fb", MAC:"f2:18:69:24:c0:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:55:55.092520 containerd[1507]: 2025-07-09 09:55:55.087 [INFO][3870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" Namespace="default" Pod="nginx-deployment-7fcdb87857-8xtsz" WorkloadEndpoint="10.0.0.66-k8s-nginx--deployment--7fcdb87857--8xtsz-eth0" Jul 9 09:55:55.115327 containerd[1507]: time="2025-07-09T09:55:55.115253805Z" level=info msg="connecting to shim a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91" address="unix:///run/containerd/s/028d8fa3feb88663a615ff6dec1bd89f31a8d9057afb73a29851fa4b8732c23a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:55:55.139786 systemd[1]: Started cri-containerd-a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91.scope - libcontainer container a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91. Jul 9 09:55:55.150618 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:55:55.186765 containerd[1507]: time="2025-07-09T09:55:55.186726193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-8xtsz,Uid:005be67c-c5b2-491a-8b2a-f1b0dfe3a532,Namespace:default,Attempt:0,} returns sandbox id \"a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91\"" Jul 9 09:55:55.435733 systemd-networkd[1424]: cali8ed341c60e7: Gained IPv6LL Jul 9 09:55:55.785223 kubelet[1840]: E0709 09:55:55.785177 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:56.523975 systemd-networkd[1424]: cali73369be13fb: Gained IPv6LL Jul 9 09:55:56.621112 containerd[1507]: time="2025-07-09T09:55:56.621051550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:56.622362 containerd[1507]: time="2025-07-09T09:55:56.622277100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 9 09:55:56.623134 
containerd[1507]: time="2025-07-09T09:55:56.623100374Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:56.625904 containerd[1507]: time="2025-07-09T09:55:56.625869912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:55:56.626532 containerd[1507]: time="2025-07-09T09:55:56.626501947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 12.957754894s" Jul 9 09:55:56.626587 containerd[1507]: time="2025-07-09T09:55:56.626532587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 9 09:55:56.628666 containerd[1507]: time="2025-07-09T09:55:56.628632690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 09:55:56.630369 containerd[1507]: time="2025-07-09T09:55:56.630329197Z" level=info msg="CreateContainer within sandbox \"3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 9 09:55:56.636903 containerd[1507]: time="2025-07-09T09:55:56.636865385Z" level=info msg="Container 7dfb9e706552f0b714864ff8291a5efe2561af2fb4ba9d41490a1f185e6eb72e: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:55:56.643195 containerd[1507]: time="2025-07-09T09:55:56.643150975Z" level=info msg="CreateContainer within sandbox 
\"3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7dfb9e706552f0b714864ff8291a5efe2561af2fb4ba9d41490a1f185e6eb72e\"" Jul 9 09:55:56.643835 containerd[1507]: time="2025-07-09T09:55:56.643799530Z" level=info msg="StartContainer for \"7dfb9e706552f0b714864ff8291a5efe2561af2fb4ba9d41490a1f185e6eb72e\"" Jul 9 09:55:56.645163 containerd[1507]: time="2025-07-09T09:55:56.645131120Z" level=info msg="connecting to shim 7dfb9e706552f0b714864ff8291a5efe2561af2fb4ba9d41490a1f185e6eb72e" address="unix:///run/containerd/s/fbe974d89a1404b654e8cedba54fe5b2533666572bbf929e2475bfaad44c87db" protocol=ttrpc version=3 Jul 9 09:55:56.668753 systemd[1]: Started cri-containerd-7dfb9e706552f0b714864ff8291a5efe2561af2fb4ba9d41490a1f185e6eb72e.scope - libcontainer container 7dfb9e706552f0b714864ff8291a5efe2561af2fb4ba9d41490a1f185e6eb72e. Jul 9 09:55:56.702783 containerd[1507]: time="2025-07-09T09:55:56.702730025Z" level=info msg="StartContainer for \"7dfb9e706552f0b714864ff8291a5efe2561af2fb4ba9d41490a1f185e6eb72e\" returns successfully" Jul 9 09:55:56.786271 kubelet[1840]: E0709 09:55:56.786150 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:57.786909 kubelet[1840]: E0709 09:55:57.786859 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:58.788034 kubelet[1840]: E0709 09:55:58.787982 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:59.736834 kubelet[1840]: E0709 09:55:59.736780 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:55:59.788636 kubelet[1840]: E0709 09:55:59.788584 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 9 09:56:00.789032 kubelet[1840]: E0709 09:56:00.788976 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:01.789465 kubelet[1840]: E0709 09:56:01.789415 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:02.790383 kubelet[1840]: E0709 09:56:02.790285 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:03.791221 kubelet[1840]: E0709 09:56:03.791168 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:04.361240 containerd[1507]: time="2025-07-09T09:56:04.359850150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:04.361240 containerd[1507]: time="2025-07-09T09:56:04.360304547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 9 09:56:04.361240 containerd[1507]: time="2025-07-09T09:56:04.361028782Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:04.363260 containerd[1507]: time="2025-07-09T09:56:04.362843849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:04.363879 containerd[1507]: time="2025-07-09T09:56:04.363820802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 7.735152112s" Jul 9 09:56:04.363879 containerd[1507]: time="2025-07-09T09:56:04.363854961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 9 09:56:04.365534 containerd[1507]: time="2025-07-09T09:56:04.365462110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 9 09:56:04.366412 containerd[1507]: time="2025-07-09T09:56:04.366269264Z" level=info msg="CreateContainer within sandbox \"49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 09:56:04.379371 containerd[1507]: time="2025-07-09T09:56:04.379315490Z" level=info msg="Container a40f7a11cf75872956dd59e684eeaeb144b42b55cbbb977708df17b731cbe56a: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:04.385876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135365544.mount: Deactivated successfully. 
Jul 9 09:56:04.389653 containerd[1507]: time="2025-07-09T09:56:04.389567576Z" level=info msg="CreateContainer within sandbox \"49166175ef8d6e26eadfdffca827d2b06748e37e90805e5e00bc9d618502bd52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a40f7a11cf75872956dd59e684eeaeb144b42b55cbbb977708df17b731cbe56a\"" Jul 9 09:56:04.390106 containerd[1507]: time="2025-07-09T09:56:04.390080692Z" level=info msg="StartContainer for \"a40f7a11cf75872956dd59e684eeaeb144b42b55cbbb977708df17b731cbe56a\"" Jul 9 09:56:04.391448 containerd[1507]: time="2025-07-09T09:56:04.391416082Z" level=info msg="connecting to shim a40f7a11cf75872956dd59e684eeaeb144b42b55cbbb977708df17b731cbe56a" address="unix:///run/containerd/s/0bea72f80b0543b4587293dfabb895074251b283de3cf8f6194506d4e816a012" protocol=ttrpc version=3 Jul 9 09:56:04.445761 systemd[1]: Started cri-containerd-a40f7a11cf75872956dd59e684eeaeb144b42b55cbbb977708df17b731cbe56a.scope - libcontainer container a40f7a11cf75872956dd59e684eeaeb144b42b55cbbb977708df17b731cbe56a. 
Jul 9 09:56:04.500858 containerd[1507]: time="2025-07-09T09:56:04.500822571Z" level=info msg="StartContainer for \"a40f7a11cf75872956dd59e684eeaeb144b42b55cbbb977708df17b731cbe56a\" returns successfully" Jul 9 09:56:04.792303 kubelet[1840]: E0709 09:56:04.792250 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:05.792754 kubelet[1840]: E0709 09:56:05.792669 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:06.079285 kubelet[1840]: I0709 09:56:06.079182 1840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 09:56:06.793600 kubelet[1840]: E0709 09:56:06.793537 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:07.794680 kubelet[1840]: E0709 09:56:07.794629 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:08.794930 kubelet[1840]: E0709 09:56:08.794857 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:09.795776 kubelet[1840]: E0709 09:56:09.795716 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:10.796682 kubelet[1840]: E0709 09:56:10.796629 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:11.797623 kubelet[1840]: E0709 09:56:11.797552 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:12.144779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961472942.mount: Deactivated successfully. 
Jul 9 09:56:12.481640 containerd[1507]: time="2025-07-09T09:56:12.481570445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:12.482604 containerd[1507]: time="2025-07-09T09:56:12.482395743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 9 09:56:12.483312 containerd[1507]: time="2025-07-09T09:56:12.483273522Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:12.485332 containerd[1507]: time="2025-07-09T09:56:12.485305445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:12.486328 containerd[1507]: time="2025-07-09T09:56:12.486165143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 8.120662033s" Jul 9 09:56:12.486328 containerd[1507]: time="2025-07-09T09:56:12.486199064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 9 09:56:12.497058 containerd[1507]: time="2025-07-09T09:56:12.496786371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 9 09:56:12.497843 containerd[1507]: time="2025-07-09T09:56:12.497811512Z" level=info msg="CreateContainer within sandbox 
\"17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 9 09:56:12.550604 containerd[1507]: time="2025-07-09T09:56:12.550462959Z" level=info msg="Container 121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:12.607225 containerd[1507]: time="2025-07-09T09:56:12.607172492Z" level=info msg="CreateContainer within sandbox \"17bb2254aa459ceba10eda2485d8ed94bd8f2dc65313d1638e80f0acf2d1aee2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0\"" Jul 9 09:56:12.607881 containerd[1507]: time="2025-07-09T09:56:12.607858627Z" level=info msg="StartContainer for \"121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0\"" Jul 9 09:56:12.609167 containerd[1507]: time="2025-07-09T09:56:12.609131414Z" level=info msg="connecting to shim 121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0" address="unix:///run/containerd/s/aaa4ac42cc4675cc9831363924eba5a7c20e4884dbe4a1523819187d3ac0e076" protocol=ttrpc version=3 Jul 9 09:56:12.634801 systemd[1]: Started cri-containerd-121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0.scope - libcontainer container 121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0. 
Jul 9 09:56:12.678212 containerd[1507]: time="2025-07-09T09:56:12.678178131Z" level=info msg="StartContainer for \"121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0\" returns successfully" Jul 9 09:56:12.798979 kubelet[1840]: E0709 09:56:12.798590 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:13.105459 kubelet[1840]: I0709 09:56:13.105220 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ff7476b68-297zz" podStartSLOduration=98.885053023 podStartE2EDuration="1m57.105203426s" podCreationTimestamp="2025-07-09 09:54:16 +0000 UTC" firstStartedPulling="2025-07-09 09:55:46.144528033 +0000 UTC m=+67.150228789" lastFinishedPulling="2025-07-09 09:56:04.364678396 +0000 UTC m=+85.370379192" observedRunningTime="2025-07-09 09:56:05.095531318 +0000 UTC m=+86.101232074" watchObservedRunningTime="2025-07-09 09:56:13.105203426 +0000 UTC m=+94.110904222" Jul 9 09:56:13.233935 containerd[1507]: time="2025-07-09T09:56:13.233890962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0\" id:\"015e0797dca7a96007992b421ff2d20e072adecceea5a12051dbdaf3a73796c7\" pid:4114 exit_status:1 exited_at:{seconds:1752054973 nanos:227669114}" Jul 9 09:56:13.798935 kubelet[1840]: E0709 09:56:13.798891 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:14.086971 containerd[1507]: time="2025-07-09T09:56:14.086744901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271\" id:\"44d067a32572faaa38142a836d1b2c702df11d6f0814968b418321c1b9d3d888\" pid:4141 exited_at:{seconds:1752054974 nanos:86288131}" Jul 9 09:56:14.105071 kubelet[1840]: I0709 09:56:14.105012 1840 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-sl8nm" podStartSLOduration=88.7209392 podStartE2EDuration="1m54.104975703s" podCreationTimestamp="2025-07-09 09:54:20 +0000 UTC" firstStartedPulling="2025-07-09 09:55:47.112517623 +0000 UTC m=+68.118218379" lastFinishedPulling="2025-07-09 09:56:12.496554086 +0000 UTC m=+93.502254882" observedRunningTime="2025-07-09 09:56:13.108111606 +0000 UTC m=+94.113812402" watchObservedRunningTime="2025-07-09 09:56:14.104975703 +0000 UTC m=+95.110676499" Jul 9 09:56:14.159093 containerd[1507]: time="2025-07-09T09:56:14.159053460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0\" id:\"d4bba52fc7e060d8aedf57235fb013424461757529562f80ad251cad6853c452\" pid:4168 exit_status:1 exited_at:{seconds:1752054974 nanos:158772814}" Jul 9 09:56:14.799357 kubelet[1840]: E0709 09:56:14.799298 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:15.170752 containerd[1507]: time="2025-07-09T09:56:15.170430914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0\" id:\"3737d04343b9959b7f0115441d822d3c7951f31a879aa80041a14795ef144140\" pid:4191 exit_status:1 exited_at:{seconds:1752054975 nanos:170077428}" Jul 9 09:56:15.799870 kubelet[1840]: E0709 09:56:15.799802 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:16.800699 kubelet[1840]: E0709 09:56:16.800639 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:17.800993 kubelet[1840]: E0709 09:56:17.800929 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:18.802018 kubelet[1840]: E0709 
09:56:18.801959 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:19.736777 kubelet[1840]: E0709 09:56:19.736734 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:19.802322 kubelet[1840]: E0709 09:56:19.802276 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:19.886333 containerd[1507]: time="2025-07-09T09:56:19.886289482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:19.886786 containerd[1507]: time="2025-07-09T09:56:19.886738009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 9 09:56:19.893686 containerd[1507]: time="2025-07-09T09:56:19.887431821Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:19.893940 containerd[1507]: time="2025-07-09T09:56:19.889770420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 7.392940529s" Jul 9 09:56:19.893996 containerd[1507]: time="2025-07-09T09:56:19.893942729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 9 09:56:19.894590 containerd[1507]: time="2025-07-09T09:56:19.894519499Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:19.895152 containerd[1507]: time="2025-07-09T09:56:19.895120749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 09:56:19.922390 containerd[1507]: time="2025-07-09T09:56:19.922339240Z" level=info msg="CreateContainer within sandbox \"cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 9 09:56:19.952560 containerd[1507]: time="2025-07-09T09:56:19.952500380Z" level=info msg="Container b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:19.953069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245573502.mount: Deactivated successfully. Jul 9 09:56:19.960546 containerd[1507]: time="2025-07-09T09:56:19.960484913Z" level=info msg="CreateContainer within sandbox \"cf60e9bf89effb77a17ffbc99d7bf7adc15b4dd48a36510b4530ed0db5789f65\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76\"" Jul 9 09:56:19.961104 containerd[1507]: time="2025-07-09T09:56:19.961077642Z" level=info msg="StartContainer for \"b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76\"" Jul 9 09:56:19.963330 containerd[1507]: time="2025-07-09T09:56:19.963295159Z" level=info msg="connecting to shim b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76" address="unix:///run/containerd/s/14adce599da7fb4cf994598e49781deaa92ca77fe523885bac3144932ec76007" protocol=ttrpc version=3 Jul 9 09:56:19.984782 systemd[1]: Started cri-containerd-b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76.scope - libcontainer container 
b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76. Jul 9 09:56:20.024779 containerd[1507]: time="2025-07-09T09:56:20.024631562Z" level=info msg="StartContainer for \"b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76\" returns successfully" Jul 9 09:56:20.123390 kubelet[1840]: I0709 09:56:20.123300 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bb67b5544-mqk5b" podStartSLOduration=90.386980643 podStartE2EDuration="2m1.123259538s" podCreationTimestamp="2025-07-09 09:54:19 +0000 UTC" firstStartedPulling="2025-07-09 09:55:49.158454287 +0000 UTC m=+70.164155043" lastFinishedPulling="2025-07-09 09:56:19.894733142 +0000 UTC m=+100.900433938" observedRunningTime="2025-07-09 09:56:20.122504806 +0000 UTC m=+101.128205602" watchObservedRunningTime="2025-07-09 09:56:20.123259538 +0000 UTC m=+101.128960334" Jul 9 09:56:20.166872 containerd[1507]: time="2025-07-09T09:56:20.166460509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76\" id:\"a850590631e92ae828e16c1d71917460b5176a5b18725ac6218e7bf7fa629bda\" pid:4263 exit_status:1 exited_at:{seconds:1752054980 nanos:166167584}" Jul 9 09:56:20.698751 kubelet[1840]: I0709 09:56:20.698713 1840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 09:56:20.803465 kubelet[1840]: E0709 09:56:20.803427 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:21.150475 containerd[1507]: time="2025-07-09T09:56:21.149994661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76\" id:\"930d51f24ee14fa0912b17822ef4afcd9418f0f40b80274d9bd39301a744e786\" pid:4344 exited_at:{seconds:1752054981 nanos:149512014}" Jul 9 09:56:21.205054 containerd[1507]: time="2025-07-09T09:56:21.204976708Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:21.209202 containerd[1507]: time="2025-07-09T09:56:21.208888488Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 9 09:56:21.211129 containerd[1507]: time="2025-07-09T09:56:21.211097722Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:21.218621 containerd[1507]: time="2025-07-09T09:56:21.218544597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:21.219861 containerd[1507]: time="2025-07-09T09:56:21.219813576Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.324658707s" Jul 9 09:56:21.219934 containerd[1507]: time="2025-07-09T09:56:21.219864697Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 9 09:56:21.221231 containerd[1507]: time="2025-07-09T09:56:21.221203078Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 09:56:21.222489 containerd[1507]: time="2025-07-09T09:56:21.222458097Z" level=info msg="CreateContainer within sandbox \"6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 09:56:21.234077 
containerd[1507]: time="2025-07-09T09:56:21.233439546Z" level=info msg="Container 14b8027e9fbc56f3224f24013b21a35e01217c8115a510e6f7db30e4e249106f: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:21.242080 containerd[1507]: time="2025-07-09T09:56:21.242017038Z" level=info msg="CreateContainer within sandbox \"6e357f7278905813337977bd39deee939062e12d17002a0e5103f91352ecc0f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14b8027e9fbc56f3224f24013b21a35e01217c8115a510e6f7db30e4e249106f\"" Jul 9 09:56:21.242622 containerd[1507]: time="2025-07-09T09:56:21.242597407Z" level=info msg="StartContainer for \"14b8027e9fbc56f3224f24013b21a35e01217c8115a510e6f7db30e4e249106f\"" Jul 9 09:56:21.243483 containerd[1507]: time="2025-07-09T09:56:21.243454100Z" level=info msg="connecting to shim 14b8027e9fbc56f3224f24013b21a35e01217c8115a510e6f7db30e4e249106f" address="unix:///run/containerd/s/34a2fd0663e592c6cc866b1d36a97d7baecfabd2e51fa2231f3afc57514f70d7" protocol=ttrpc version=3 Jul 9 09:56:21.274780 systemd[1]: Started cri-containerd-14b8027e9fbc56f3224f24013b21a35e01217c8115a510e6f7db30e4e249106f.scope - libcontainer container 14b8027e9fbc56f3224f24013b21a35e01217c8115a510e6f7db30e4e249106f. 
Jul 9 09:56:21.305062 containerd[1507]: time="2025-07-09T09:56:21.304948847Z" level=info msg="StartContainer for \"14b8027e9fbc56f3224f24013b21a35e01217c8115a510e6f7db30e4e249106f\" returns successfully" Jul 9 09:56:21.326319 containerd[1507]: time="2025-07-09T09:56:21.325282601Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:21.326938 containerd[1507]: time="2025-07-09T09:56:21.326903505Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jul 9 09:56:21.335322 containerd[1507]: time="2025-07-09T09:56:21.334128297Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 112.885299ms" Jul 9 09:56:21.335322 containerd[1507]: time="2025-07-09T09:56:21.334177858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 9 09:56:21.335747 containerd[1507]: time="2025-07-09T09:56:21.335718041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 09:56:21.341633 containerd[1507]: time="2025-07-09T09:56:21.340016387Z" level=info msg="CreateContainer within sandbox \"c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 09:56:21.348621 containerd[1507]: time="2025-07-09T09:56:21.348454037Z" level=info msg="Container d45c5ba82b3cad93dcebe3e2b12d502dfe1cdc9ed0f2ecfb842ab8064a2d5cc5: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:21.355507 containerd[1507]: 
time="2025-07-09T09:56:21.355462505Z" level=info msg="CreateContainer within sandbox \"c260bead9104a1779877ea7ad7886c6174dee2470f485be28d08dd179a013ea8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d45c5ba82b3cad93dcebe3e2b12d502dfe1cdc9ed0f2ecfb842ab8064a2d5cc5\"" Jul 9 09:56:21.356059 containerd[1507]: time="2025-07-09T09:56:21.355959913Z" level=info msg="StartContainer for \"d45c5ba82b3cad93dcebe3e2b12d502dfe1cdc9ed0f2ecfb842ab8064a2d5cc5\"" Jul 9 09:56:21.357254 containerd[1507]: time="2025-07-09T09:56:21.356960408Z" level=info msg="connecting to shim d45c5ba82b3cad93dcebe3e2b12d502dfe1cdc9ed0f2ecfb842ab8064a2d5cc5" address="unix:///run/containerd/s/b0daec138e64405423522172c1e0a8620a0362c62dbfc5aea5c952f54c155998" protocol=ttrpc version=3 Jul 9 09:56:21.378804 systemd[1]: Started cri-containerd-d45c5ba82b3cad93dcebe3e2b12d502dfe1cdc9ed0f2ecfb842ab8064a2d5cc5.scope - libcontainer container d45c5ba82b3cad93dcebe3e2b12d502dfe1cdc9ed0f2ecfb842ab8064a2d5cc5. Jul 9 09:56:21.426804 containerd[1507]: time="2025-07-09T09:56:21.426674282Z" level=info msg="StartContainer for \"d45c5ba82b3cad93dcebe3e2b12d502dfe1cdc9ed0f2ecfb842ab8064a2d5cc5\" returns successfully" Jul 9 09:56:21.795915 containerd[1507]: time="2025-07-09T09:56:21.795827566Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:21.799145 containerd[1507]: time="2025-07-09T09:56:21.799089056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 9 09:56:21.801310 containerd[1507]: time="2025-07-09T09:56:21.801210889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 465.455807ms" Jul 9 09:56:21.801310 containerd[1507]: time="2025-07-09T09:56:21.801248170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 9 09:56:21.802407 containerd[1507]: time="2025-07-09T09:56:21.802369267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 09:56:21.803228 containerd[1507]: time="2025-07-09T09:56:21.803193200Z" level=info msg="CreateContainer within sandbox \"b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 09:56:21.804332 kubelet[1840]: E0709 09:56:21.804306 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:21.827667 containerd[1507]: time="2025-07-09T09:56:21.825711786Z" level=info msg="Container 86ddc21c791c8368bbda5bfba9798d0bcb2e4fb252047fea820845db8270aeac: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:21.841456 containerd[1507]: time="2025-07-09T09:56:21.841388108Z" level=info msg="CreateContainer within sandbox \"b5663f63c28f2a74bf4c5cf6641e2f6ea2602af173c545a6593cde173767ad81\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86ddc21c791c8368bbda5bfba9798d0bcb2e4fb252047fea820845db8270aeac\"" Jul 9 09:56:21.841989 containerd[1507]: time="2025-07-09T09:56:21.841950156Z" level=info msg="StartContainer for \"86ddc21c791c8368bbda5bfba9798d0bcb2e4fb252047fea820845db8270aeac\"" Jul 9 09:56:21.843341 containerd[1507]: time="2025-07-09T09:56:21.843302577Z" level=info msg="connecting to shim 86ddc21c791c8368bbda5bfba9798d0bcb2e4fb252047fea820845db8270aeac" address="unix:///run/containerd/s/7a5611c6849de8efa2bd3ac02b171b2bcc8a3e9c44f227a702b7a35abe540ee3" 
protocol=ttrpc version=3 Jul 9 09:56:21.873855 systemd[1]: Started cri-containerd-86ddc21c791c8368bbda5bfba9798d0bcb2e4fb252047fea820845db8270aeac.scope - libcontainer container 86ddc21c791c8368bbda5bfba9798d0bcb2e4fb252047fea820845db8270aeac. Jul 9 09:56:22.045745 containerd[1507]: time="2025-07-09T09:56:22.045699428Z" level=info msg="StartContainer for \"86ddc21c791c8368bbda5bfba9798d0bcb2e4fb252047fea820845db8270aeac\" returns successfully" Jul 9 09:56:22.278829 kubelet[1840]: I0709 09:56:22.278760 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g2tf5" podStartSLOduration=104.274237149 podStartE2EDuration="2m16.278742565s" podCreationTimestamp="2025-07-09 09:54:06 +0000 UTC" firstStartedPulling="2025-07-09 09:55:49.216328216 +0000 UTC m=+70.222029012" lastFinishedPulling="2025-07-09 09:56:21.220833632 +0000 UTC m=+102.226534428" observedRunningTime="2025-07-09 09:56:22.158297778 +0000 UTC m=+103.163998574" watchObservedRunningTime="2025-07-09 09:56:22.278742565 +0000 UTC m=+103.284443361" Jul 9 09:56:22.329382 kubelet[1840]: I0709 09:56:22.328739 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ff7476b68-8zv5b" podStartSLOduration=96.660669534 podStartE2EDuration="2m6.328722146s" podCreationTimestamp="2025-07-09 09:54:16 +0000 UTC" firstStartedPulling="2025-07-09 09:55:52.133894848 +0000 UTC m=+73.139595604" lastFinishedPulling="2025-07-09 09:56:21.80194742 +0000 UTC m=+102.807648216" observedRunningTime="2025-07-09 09:56:22.326368071 +0000 UTC m=+103.332068867" watchObservedRunningTime="2025-07-09 09:56:22.328722146 +0000 UTC m=+103.334422942" Jul 9 09:56:22.329382 kubelet[1840]: I0709 09:56:22.328989 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hqnt5" podStartSLOduration=105.083049774 podStartE2EDuration="2m16.32898315s" podCreationTimestamp="2025-07-09 09:54:06 +0000 UTC" 
firstStartedPulling="2025-07-09 09:55:50.088955852 +0000 UTC m=+71.094656648" lastFinishedPulling="2025-07-09 09:56:21.334889268 +0000 UTC m=+102.340590024" observedRunningTime="2025-07-09 09:56:22.27911209 +0000 UTC m=+103.284812886" watchObservedRunningTime="2025-07-09 09:56:22.32898315 +0000 UTC m=+103.334683946" Jul 9 09:56:22.805408 kubelet[1840]: E0709 09:56:22.805299 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:22.832474 containerd[1507]: time="2025-07-09T09:56:22.831910930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:22.833133 containerd[1507]: time="2025-07-09T09:56:22.832706621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 9 09:56:22.833548 containerd[1507]: time="2025-07-09T09:56:22.833523314Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:22.835912 containerd[1507]: time="2025-07-09T09:56:22.835885829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:22.836855 containerd[1507]: time="2025-07-09T09:56:22.836814522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.034392014s" Jul 9 09:56:22.836920 containerd[1507]: time="2025-07-09T09:56:22.836845803Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 9 09:56:22.838335 containerd[1507]: time="2025-07-09T09:56:22.838312905Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 9 09:56:22.840603 containerd[1507]: time="2025-07-09T09:56:22.840566578Z" level=info msg="CreateContainer within sandbox \"54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 09:56:22.849776 containerd[1507]: time="2025-07-09T09:56:22.849307028Z" level=info msg="Container 493f8610caa024926766e725f0ec053ead59802f1e1c46d597f0237a28a90167: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:22.856505 containerd[1507]: time="2025-07-09T09:56:22.856464414Z" level=info msg="CreateContainer within sandbox \"54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"493f8610caa024926766e725f0ec053ead59802f1e1c46d597f0237a28a90167\"" Jul 9 09:56:22.857236 containerd[1507]: time="2025-07-09T09:56:22.857204305Z" level=info msg="StartContainer for \"493f8610caa024926766e725f0ec053ead59802f1e1c46d597f0237a28a90167\"" Jul 9 09:56:22.866379 containerd[1507]: time="2025-07-09T09:56:22.866298720Z" level=info msg="connecting to shim 493f8610caa024926766e725f0ec053ead59802f1e1c46d597f0237a28a90167" address="unix:///run/containerd/s/c7fcbec433d36cdc7dc28c4631f80ae1e783e0cf46884e400ffff03323a42dfd" protocol=ttrpc version=3 Jul 9 09:56:22.892776 systemd[1]: Started cri-containerd-493f8610caa024926766e725f0ec053ead59802f1e1c46d597f0237a28a90167.scope - libcontainer container 493f8610caa024926766e725f0ec053ead59802f1e1c46d597f0237a28a90167. 
Jul 9 09:56:22.954174 containerd[1507]: time="2025-07-09T09:56:22.954134462Z" level=info msg="StartContainer for \"493f8610caa024926766e725f0ec053ead59802f1e1c46d597f0237a28a90167\" returns successfully" Jul 9 09:56:23.806206 kubelet[1840]: E0709 09:56:23.806149 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:24.806559 kubelet[1840]: E0709 09:56:24.806505 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:25.806712 kubelet[1840]: E0709 09:56:25.806662 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:26.807009 kubelet[1840]: E0709 09:56:26.806960 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:27.808101 kubelet[1840]: E0709 09:56:27.808041 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:28.608897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4036661725.mount: Deactivated successfully. 
Jul 9 09:56:28.809099 kubelet[1840]: E0709 09:56:28.809057 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:29.810209 kubelet[1840]: E0709 09:56:29.810147 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:30.725203 containerd[1507]: time="2025-07-09T09:56:30.725149838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0\" id:\"2ef71839107e045ef6d88dc9fb6e1312b8d1bbd937814d78f7f7d4bbe38fdb98\" pid:4530 exited_at:{seconds:1752054990 nanos:724688193}" Jul 9 09:56:30.810962 kubelet[1840]: E0709 09:56:30.810914 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:31.811341 kubelet[1840]: E0709 09:56:31.811290 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:32.238407 containerd[1507]: time="2025-07-09T09:56:32.238358341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:32.239026 containerd[1507]: time="2025-07-09T09:56:32.238991667Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69964585" Jul 9 09:56:32.239921 containerd[1507]: time="2025-07-09T09:56:32.239892596Z" level=info msg="ImageCreate event name:\"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:32.243412 containerd[1507]: time="2025-07-09T09:56:32.243369751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 
09:56:32.244049 containerd[1507]: time="2025-07-09T09:56:32.243917597Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\", size \"69964463\" in 9.405441889s" Jul 9 09:56:32.244049 containerd[1507]: time="2025-07-09T09:56:32.243959997Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 9 09:56:32.245771 containerd[1507]: time="2025-07-09T09:56:32.245729095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 9 09:56:32.246535 containerd[1507]: time="2025-07-09T09:56:32.246507303Z" level=info msg="CreateContainer within sandbox \"a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 9 09:56:32.252184 containerd[1507]: time="2025-07-09T09:56:32.252151959Z" level=info msg="Container e843e8f0b856ecafc480781be404131bb1f6790d9b477e39550059c2ba885d36: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:32.260415 containerd[1507]: time="2025-07-09T09:56:32.260294561Z" level=info msg="CreateContainer within sandbox \"a81e42024af04dcc60432bcac825f0ec7880b0c913c5d1f9573375b856da0c91\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e843e8f0b856ecafc480781be404131bb1f6790d9b477e39550059c2ba885d36\"" Jul 9 09:56:32.261116 containerd[1507]: time="2025-07-09T09:56:32.261068009Z" level=info msg="StartContainer for \"e843e8f0b856ecafc480781be404131bb1f6790d9b477e39550059c2ba885d36\"" Jul 9 09:56:32.261923 containerd[1507]: time="2025-07-09T09:56:32.261888337Z" level=info msg="connecting to shim e843e8f0b856ecafc480781be404131bb1f6790d9b477e39550059c2ba885d36" 
address="unix:///run/containerd/s/028d8fa3feb88663a615ff6dec1bd89f31a8d9057afb73a29851fa4b8732c23a" protocol=ttrpc version=3 Jul 9 09:56:32.289836 systemd[1]: Started cri-containerd-e843e8f0b856ecafc480781be404131bb1f6790d9b477e39550059c2ba885d36.scope - libcontainer container e843e8f0b856ecafc480781be404131bb1f6790d9b477e39550059c2ba885d36. Jul 9 09:56:32.322591 containerd[1507]: time="2025-07-09T09:56:32.321845540Z" level=info msg="StartContainer for \"e843e8f0b856ecafc480781be404131bb1f6790d9b477e39550059c2ba885d36\" returns successfully" Jul 9 09:56:32.811707 kubelet[1840]: E0709 09:56:32.811661 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:33.171948 kubelet[1840]: I0709 09:56:33.171793 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-8xtsz" podStartSLOduration=29.11451008 podStartE2EDuration="1m6.171775181s" podCreationTimestamp="2025-07-09 09:55:27 +0000 UTC" firstStartedPulling="2025-07-09 09:55:55.187664466 +0000 UTC m=+76.193365222" lastFinishedPulling="2025-07-09 09:56:32.244929527 +0000 UTC m=+113.250630323" observedRunningTime="2025-07-09 09:56:33.171089974 +0000 UTC m=+114.176790730" watchObservedRunningTime="2025-07-09 09:56:33.171775181 +0000 UTC m=+114.177475977" Jul 9 09:56:33.812534 kubelet[1840]: E0709 09:56:33.812491 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:34.812981 kubelet[1840]: E0709 09:56:34.812931 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:35.813327 kubelet[1840]: E0709 09:56:35.813278 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:36.813786 kubelet[1840]: E0709 09:56:36.813738 1840 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:37.024398 systemd[1]: Created slice kubepods-besteffort-pod20f58858_6513_4c00_bb11_f91bed5ac26a.slice - libcontainer container kubepods-besteffort-pod20f58858_6513_4c00_bb11_f91bed5ac26a.slice. Jul 9 09:56:37.113387 kubelet[1840]: I0709 09:56:37.113249 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvtd\" (UniqueName: \"kubernetes.io/projected/20f58858-6513-4c00-bb11-f91bed5ac26a-kube-api-access-zdvtd\") pod \"nfs-server-provisioner-0\" (UID: \"20f58858-6513-4c00-bb11-f91bed5ac26a\") " pod="default/nfs-server-provisioner-0" Jul 9 09:56:37.113387 kubelet[1840]: I0709 09:56:37.113296 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/20f58858-6513-4c00-bb11-f91bed5ac26a-data\") pod \"nfs-server-provisioner-0\" (UID: \"20f58858-6513-4c00-bb11-f91bed5ac26a\") " pod="default/nfs-server-provisioner-0" Jul 9 09:56:37.327043 containerd[1507]: time="2025-07-09T09:56:37.326981919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:20f58858-6513-4c00-bb11-f91bed5ac26a,Namespace:default,Attempt:0,}" Jul 9 09:56:37.455860 systemd-networkd[1424]: cali60e51b789ff: Link UP Jul 9 09:56:37.456018 systemd-networkd[1424]: cali60e51b789ff: Gained carrier Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.368 [INFO][4626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 20f58858-6513-4c00-bb11-f91bed5ac26a 1427 0 2025-07-09 09:56:37 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.66 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.66-k8s-nfs--server--provisioner--0-" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.369 [INFO][4626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.405 [INFO][4641] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" HandleID="k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Workload="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.405 [INFO][4641] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" HandleID="k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Workload="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001177b0), Attrs:map[string]string{"namespace":"default", 
"node":"10.0.0.66", "pod":"nfs-server-provisioner-0", "timestamp":"2025-07-09 09:56:37.405205679 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.405 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.405 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.405 [INFO][4641] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66' Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.418 [INFO][4641] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.424 [INFO][4641] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.428 [INFO][4641] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.431 [INFO][4641] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.434 [INFO][4641] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.434 [INFO][4641] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.436 [INFO][4641] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330 Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.441 [INFO][4641] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.451 [INFO][4641] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.10/26] block=192.168.123.0/26 handle="k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.451 [INFO][4641] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.10/26] handle="k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" host="10.0.0.66" Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.451 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 09:56:37.473248 containerd[1507]: 2025-07-09 09:56:37.451 [INFO][4641] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.10/26] IPv6=[] ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" HandleID="k8s-pod-network.1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Workload="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" Jul 9 09:56:37.474358 containerd[1507]: 2025-07-09 09:56:37.453 [INFO][4626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"20f58858-6513-4c00-bb11-f91bed5ac26a", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", 
IPNetworks:[]string{"192.168.123.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:56:37.474358 containerd[1507]: 2025-07-09 09:56:37.453 [INFO][4626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.10/32] ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" Jul 9 09:56:37.474358 containerd[1507]: 2025-07-09 09:56:37.453 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" Jul 9 09:56:37.474358 containerd[1507]: 2025-07-09 09:56:37.455 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" Jul 9 09:56:37.474667 containerd[1507]: 2025-07-09 09:56:37.456 [INFO][4626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"20f58858-6513-4c00-bb11-f91bed5ac26a", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.123.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"02:fc:ff:a4:fd:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:56:37.474667 containerd[1507]: 2025-07-09 09:56:37.470 [INFO][4626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.66-k8s-nfs--server--provisioner--0-eth0" Jul 9 09:56:37.508360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19069378.mount: Deactivated successfully. 
Jul 9 09:56:37.525117 containerd[1507]: time="2025-07-09T09:56:37.525068580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:37.526324 containerd[1507]: time="2025-07-09T09:56:37.526124469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 9 09:56:37.526960 containerd[1507]: time="2025-07-09T09:56:37.526913795Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:37.529586 containerd[1507]: time="2025-07-09T09:56:37.529546097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:37.530264 containerd[1507]: time="2025-07-09T09:56:37.530237862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 5.284470327s" Jul 9 09:56:37.530320 containerd[1507]: time="2025-07-09T09:56:37.530269743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 9 09:56:37.531475 containerd[1507]: time="2025-07-09T09:56:37.531448632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 9 09:56:37.533595 containerd[1507]: time="2025-07-09T09:56:37.533254767Z" level=info msg="CreateContainer within sandbox 
\"3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 9 09:56:37.533836 containerd[1507]: time="2025-07-09T09:56:37.533807852Z" level=info msg="connecting to shim 1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330" address="unix:///run/containerd/s/9237129fa8c879088953a00be6ba3b671af36cd272d79061fe79092927e757c2" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:56:37.540327 containerd[1507]: time="2025-07-09T09:56:37.540287945Z" level=info msg="Container e69c7c88f74e0586c884210d76ecd77aa4fff966c831622c8a13cfab90e21b47: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:37.547434 containerd[1507]: time="2025-07-09T09:56:37.547375283Z" level=info msg="CreateContainer within sandbox \"3c01822c7b7a140597cfec1fe9d6a5b22299b90f714e82a437ae6b6b9b6b3174\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e69c7c88f74e0586c884210d76ecd77aa4fff966c831622c8a13cfab90e21b47\"" Jul 9 09:56:37.549193 containerd[1507]: time="2025-07-09T09:56:37.548797574Z" level=info msg="StartContainer for \"e69c7c88f74e0586c884210d76ecd77aa4fff966c831622c8a13cfab90e21b47\"" Jul 9 09:56:37.554197 containerd[1507]: time="2025-07-09T09:56:37.554155818Z" level=info msg="connecting to shim e69c7c88f74e0586c884210d76ecd77aa4fff966c831622c8a13cfab90e21b47" address="unix:///run/containerd/s/fbe974d89a1404b654e8cedba54fe5b2533666572bbf929e2475bfaad44c87db" protocol=ttrpc version=3 Jul 9 09:56:37.561742 systemd[1]: Started cri-containerd-1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330.scope - libcontainer container 1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330. Jul 9 09:56:37.581897 systemd[1]: Started cri-containerd-e69c7c88f74e0586c884210d76ecd77aa4fff966c831622c8a13cfab90e21b47.scope - libcontainer container e69c7c88f74e0586c884210d76ecd77aa4fff966c831622c8a13cfab90e21b47. 
Jul 9 09:56:37.586150 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:56:37.617178 containerd[1507]: time="2025-07-09T09:56:37.617133413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:20f58858-6513-4c00-bb11-f91bed5ac26a,Namespace:default,Attempt:0,} returns sandbox id \"1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330\"" Jul 9 09:56:37.627339 containerd[1507]: time="2025-07-09T09:56:37.627246656Z" level=info msg="StartContainer for \"e69c7c88f74e0586c884210d76ecd77aa4fff966c831622c8a13cfab90e21b47\" returns successfully" Jul 9 09:56:37.814100 kubelet[1840]: E0709 09:56:37.813917 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:38.699809 systemd-networkd[1424]: cali60e51b789ff: Gained IPv6LL Jul 9 09:56:38.814635 kubelet[1840]: E0709 09:56:38.814567 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:39.737743 kubelet[1840]: E0709 09:56:39.737689 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:39.815764 kubelet[1840]: E0709 09:56:39.815711 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:40.816501 kubelet[1840]: E0709 09:56:40.816433 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:41.817505 kubelet[1840]: E0709 09:56:41.817443 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:42.656094 containerd[1507]: time="2025-07-09T09:56:42.656022811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:42.656706 containerd[1507]: time="2025-07-09T09:56:42.656668896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 9 09:56:42.657403 containerd[1507]: time="2025-07-09T09:56:42.657359780Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:42.659781 containerd[1507]: time="2025-07-09T09:56:42.659740556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:42.660836 containerd[1507]: time="2025-07-09T09:56:42.660751043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 5.12927169s" Jul 9 09:56:42.660883 containerd[1507]: time="2025-07-09T09:56:42.660841323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 9 09:56:42.661632 containerd[1507]: time="2025-07-09T09:56:42.661600128Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 9 09:56:42.665383 containerd[1507]: time="2025-07-09T09:56:42.663809743Z" level=info msg="CreateContainer within sandbox \"54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 9 09:56:42.672419 containerd[1507]: time="2025-07-09T09:56:42.671056470Z" level=info msg="Container adfcad4fa4a7b5c5961bca1105a63529b7c6d0242cb637ff6345b086f7b1f210: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:42.678891 containerd[1507]: time="2025-07-09T09:56:42.678825041Z" level=info msg="CreateContainer within sandbox \"54584a63c135ad9e7c71ad6d62b90e5051c8977495d71daf15b17358a9bfa583\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"adfcad4fa4a7b5c5961bca1105a63529b7c6d0242cb637ff6345b086f7b1f210\"" Jul 9 09:56:42.679588 containerd[1507]: time="2025-07-09T09:56:42.679494486Z" level=info msg="StartContainer for \"adfcad4fa4a7b5c5961bca1105a63529b7c6d0242cb637ff6345b086f7b1f210\"" Jul 9 09:56:42.681328 containerd[1507]: time="2025-07-09T09:56:42.681275698Z" level=info msg="connecting to shim adfcad4fa4a7b5c5961bca1105a63529b7c6d0242cb637ff6345b086f7b1f210" address="unix:///run/containerd/s/c7fcbec433d36cdc7dc28c4631f80ae1e783e0cf46884e400ffff03323a42dfd" protocol=ttrpc version=3 Jul 9 09:56:42.709796 systemd[1]: Started cri-containerd-adfcad4fa4a7b5c5961bca1105a63529b7c6d0242cb637ff6345b086f7b1f210.scope - libcontainer container adfcad4fa4a7b5c5961bca1105a63529b7c6d0242cb637ff6345b086f7b1f210. 
Jul 9 09:56:42.771864 containerd[1507]: time="2025-07-09T09:56:42.771823974Z" level=info msg="StartContainer for \"adfcad4fa4a7b5c5961bca1105a63529b7c6d0242cb637ff6345b086f7b1f210\" returns successfully" Jul 9 09:56:42.818220 kubelet[1840]: E0709 09:56:42.818177 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:42.990715 kubelet[1840]: I0709 09:56:42.990669 1840 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 9 09:56:42.993458 kubelet[1840]: I0709 09:56:42.993419 1840 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 9 09:56:43.196162 kubelet[1840]: I0709 09:56:43.195716 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s5fkc" podStartSLOduration=75.645958394 podStartE2EDuration="2m4.195698427s" podCreationTimestamp="2025-07-09 09:54:39 +0000 UTC" firstStartedPulling="2025-07-09 09:55:54.111759974 +0000 UTC m=+75.117460770" lastFinishedPulling="2025-07-09 09:56:42.661500007 +0000 UTC m=+123.667200803" observedRunningTime="2025-07-09 09:56:43.194806381 +0000 UTC m=+124.200507177" watchObservedRunningTime="2025-07-09 09:56:43.195698427 +0000 UTC m=+124.201399223" Jul 9 09:56:43.196162 kubelet[1840]: I0709 09:56:43.195860 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79847b74bf-2kn6c" podStartSLOduration=6.333050896 podStartE2EDuration="1m0.195856188s" podCreationTimestamp="2025-07-09 09:55:43 +0000 UTC" firstStartedPulling="2025-07-09 09:55:43.668299537 +0000 UTC m=+64.674000333" lastFinishedPulling="2025-07-09 09:56:37.531104829 +0000 UTC m=+118.536805625" observedRunningTime="2025-07-09 09:56:38.177423737 +0000 UTC m=+119.183124573" 
watchObservedRunningTime="2025-07-09 09:56:43.195856188 +0000 UTC m=+124.201556984" Jul 9 09:56:43.819112 kubelet[1840]: E0709 09:56:43.819062 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:44.162970 containerd[1507]: time="2025-07-09T09:56:44.162854746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcd93e161c44d345593d52856cb7ff480990f00a0acb4c54bb2d647cad04d271\" id:\"f4d085347e61f827ebcced3a0d86bdcee4e62fc6cca0505c8b7dd54a6ee0f57d\" pid:4805 exited_at:{seconds:1752055004 nanos:162533704}" Jul 9 09:56:44.820111 kubelet[1840]: E0709 09:56:44.820020 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:45.000069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409316950.mount: Deactivated successfully. Jul 9 09:56:45.188640 containerd[1507]: time="2025-07-09T09:56:45.188602619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"121f242994033a24347c41fb3d7d7bf0e685d20a2bc0f5575fd713f99ce10de0\" id:\"8085629db25827f835d7f4179f86f13698e9bf8f36e195b1bbb6c93c7a6871b5\" pid:4840 exited_at:{seconds:1752055005 nanos:188300217}" Jul 9 09:56:45.820679 kubelet[1840]: E0709 09:56:45.820639 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:46.475341 containerd[1507]: time="2025-07-09T09:56:46.475291235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:46.476334 containerd[1507]: time="2025-07-09T09:56:46.476299961Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jul 9 09:56:46.479593 containerd[1507]: time="2025-07-09T09:56:46.477070605Z" level=info msg="ImageCreate event 
name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:46.481067 containerd[1507]: time="2025-07-09T09:56:46.481017426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:46.483335 containerd[1507]: time="2025-07-09T09:56:46.483300079Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.821667511s" Jul 9 09:56:46.483335 containerd[1507]: time="2025-07-09T09:56:46.483335279Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 9 09:56:46.504074 containerd[1507]: time="2025-07-09T09:56:46.502285663Z" level=info msg="CreateContainer within sandbox \"1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 9 09:56:46.516230 containerd[1507]: time="2025-07-09T09:56:46.515401935Z" level=info msg="Container a3111e4990ea0fba34204a3223049130f4e227f3a2a848b2a0206485db8a1b9e: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:56:46.517063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241976433.mount: Deactivated successfully. 
Jul 9 09:56:46.523123 containerd[1507]: time="2025-07-09T09:56:46.523064697Z" level=info msg="CreateContainer within sandbox \"1f5be71b7d9d3a141fe89cb6fb7f67acb4958fffee5871112d2e1bcfc297c330\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a3111e4990ea0fba34204a3223049130f4e227f3a2a848b2a0206485db8a1b9e\"" Jul 9 09:56:46.523956 containerd[1507]: time="2025-07-09T09:56:46.523919901Z" level=info msg="StartContainer for \"a3111e4990ea0fba34204a3223049130f4e227f3a2a848b2a0206485db8a1b9e\"" Jul 9 09:56:46.525053 containerd[1507]: time="2025-07-09T09:56:46.525023547Z" level=info msg="connecting to shim a3111e4990ea0fba34204a3223049130f4e227f3a2a848b2a0206485db8a1b9e" address="unix:///run/containerd/s/9237129fa8c879088953a00be6ba3b671af36cd272d79061fe79092927e757c2" protocol=ttrpc version=3 Jul 9 09:56:46.553789 systemd[1]: Started cri-containerd-a3111e4990ea0fba34204a3223049130f4e227f3a2a848b2a0206485db8a1b9e.scope - libcontainer container a3111e4990ea0fba34204a3223049130f4e227f3a2a848b2a0206485db8a1b9e. 
Jul 9 09:56:46.585590 containerd[1507]: time="2025-07-09T09:56:46.585527798Z" level=info msg="StartContainer for \"a3111e4990ea0fba34204a3223049130f4e227f3a2a848b2a0206485db8a1b9e\" returns successfully" Jul 9 09:56:46.822314 kubelet[1840]: E0709 09:56:46.822152 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:47.245103 kubelet[1840]: I0709 09:56:47.245019 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.379156965 podStartE2EDuration="10.244992545s" podCreationTimestamp="2025-07-09 09:56:37 +0000 UTC" firstStartedPulling="2025-07-09 09:56:37.618384784 +0000 UTC m=+118.624085580" lastFinishedPulling="2025-07-09 09:56:46.484220364 +0000 UTC m=+127.489921160" observedRunningTime="2025-07-09 09:56:47.244750944 +0000 UTC m=+128.250451740" watchObservedRunningTime="2025-07-09 09:56:47.244992545 +0000 UTC m=+128.250693301" Jul 9 09:56:47.822521 kubelet[1840]: E0709 09:56:47.822480 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:48.823363 kubelet[1840]: E0709 09:56:48.823309 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:49.824129 kubelet[1840]: E0709 09:56:49.824072 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:50.824553 kubelet[1840]: E0709 09:56:50.824509 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:51.151610 containerd[1507]: time="2025-07-09T09:56:51.151448643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b09dc8d319b941ee42b8481ddbd31d75f4645ee090b89582f412fab2dc7fcf76\" 
id:\"0a06b1e1bb8b92d4227061b33432d6be8244d6b118213104951ce4a2d2fe1698\" pid:4952 exited_at:{seconds:1752055011 nanos:151042481}" Jul 9 09:56:51.825491 kubelet[1840]: E0709 09:56:51.825437 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:52.826172 kubelet[1840]: E0709 09:56:52.826121 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:53.826589 kubelet[1840]: E0709 09:56:53.826523 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:54.827700 kubelet[1840]: E0709 09:56:54.827656 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:55.828555 kubelet[1840]: E0709 09:56:55.828505 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 9 09:56:56.049068 systemd[1]: Created slice kubepods-besteffort-podf3faff87_ad47_414f_b0a3_e530ac7a0ee6.slice - libcontainer container kubepods-besteffort-podf3faff87_ad47_414f_b0a3_e530ac7a0ee6.slice. 
Jul 9 09:56:56.145028 kubelet[1840]: I0709 09:56:56.144675 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78r8v\" (UniqueName: \"kubernetes.io/projected/f3faff87-ad47-414f-b0a3-e530ac7a0ee6-kube-api-access-78r8v\") pod \"test-pod-1\" (UID: \"f3faff87-ad47-414f-b0a3-e530ac7a0ee6\") " pod="default/test-pod-1" Jul 9 09:56:56.145028 kubelet[1840]: I0709 09:56:56.144725 1840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-665353d8-3264-406a-bb72-2808f3cfe5f0\" (UniqueName: \"kubernetes.io/nfs/f3faff87-ad47-414f-b0a3-e530ac7a0ee6-pvc-665353d8-3264-406a-bb72-2808f3cfe5f0\") pod \"test-pod-1\" (UID: \"f3faff87-ad47-414f-b0a3-e530ac7a0ee6\") " pod="default/test-pod-1" Jul 9 09:56:56.289652 kernel: netfs: FS-Cache loaded Jul 9 09:56:56.319890 kernel: RPC: Registered named UNIX socket transport module. Jul 9 09:56:56.320062 kernel: RPC: Registered udp transport module. Jul 9 09:56:56.320084 kernel: RPC: Registered tcp transport module. Jul 9 09:56:56.320097 kernel: RPC: Registered tcp-with-tls transport module. Jul 9 09:56:56.320927 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jul 9 09:56:56.501958 kernel: NFS: Registering the id_resolver key type Jul 9 09:56:56.502079 kernel: Key type id_resolver registered Jul 9 09:56:56.502097 kernel: Key type id_legacy registered Jul 9 09:56:56.523353 nfsidmap[4986]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jul 9 09:56:56.524032 nfsidmap[4986]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 9 09:56:56.527527 nfsidmap[4987]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jul 9 09:56:56.527698 nfsidmap[4987]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 9 09:56:56.535066 nfsrahead[4989]: setting /var/lib/kubelet/pods/f3faff87-ad47-414f-b0a3-e530ac7a0ee6/volumes/kubernetes.io~nfs/pvc-665353d8-3264-406a-bb72-2808f3cfe5f0 readahead to 128 Jul 9 09:56:56.652381 containerd[1507]: time="2025-07-09T09:56:56.652339585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f3faff87-ad47-414f-b0a3-e530ac7a0ee6,Namespace:default,Attempt:0,}" Jul 9 09:56:56.753610 systemd-networkd[1424]: cali5ec59c6bf6e: Link UP Jul 9 09:56:56.754049 systemd-networkd[1424]: cali5ec59c6bf6e: Gained carrier Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.690 [INFO][4990] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.66-k8s-test--pod--1-eth0 default f3faff87-ad47-414f-b0a3-e530ac7a0ee6 1522 0 2025-07-09 09:56:37 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.66 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] 
cali5ec59c6bf6e [] [] }} ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.66-k8s-test--pod--1-"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.690 [INFO][4990] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.66-k8s-test--pod--1-eth0"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.712 [INFO][5004] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" HandleID="k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Workload="10.0.0.66-k8s-test--pod--1-eth0"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.712 [INFO][5004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" HandleID="k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Workload="10.0.0.66-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd600), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.66", "pod":"test-pod-1", "timestamp":"2025-07-09 09:56:56.7126421 +0000 UTC"}, Hostname:"10.0.0.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.712 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.712 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.712 [INFO][5004] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.66'
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.722 [INFO][5004] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.727 [INFO][5004] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.731 [INFO][5004] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.732 [INFO][5004] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.736 [INFO][5004] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.736 [INFO][5004] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.738 [INFO][5004] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.741 [INFO][5004] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.747 [INFO][5004] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.123.11/26] block=192.168.123.0/26 handle="k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.747 [INFO][5004] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.11/26] handle="k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" host="10.0.0.66"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.747 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.747 [INFO][5004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.123.11/26] IPv6=[] ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" HandleID="k8s-pod-network.ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Workload="10.0.0.66-k8s-test--pod--1-eth0"
Jul 9 09:56:56.768155 containerd[1507]: 2025-07-09 09:56:56.749 [INFO][4990] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.66-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f3faff87-ad47-414f-b0a3-e530ac7a0ee6", ResourceVersion:"1522", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 56, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.123.11/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 9 09:56:56.768891 containerd[1507]: 2025-07-09 09:56:56.750 [INFO][4990] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.11/32] ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.66-k8s-test--pod--1-eth0"
Jul 9 09:56:56.768891 containerd[1507]: 2025-07-09 09:56:56.750 [INFO][4990] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.66-k8s-test--pod--1-eth0"
Jul 9 09:56:56.768891 containerd[1507]: 2025-07-09 09:56:56.755 [INFO][4990] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.66-k8s-test--pod--1-eth0"
Jul 9 09:56:56.768891 containerd[1507]: 2025-07-09 09:56:56.756 [INFO][4990] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.66-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.66-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f3faff87-ad47-414f-b0a3-e530ac7a0ee6", ResourceVersion:"1522", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 56, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.66", ContainerID:"ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.123.11/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ba:2d:ef:31:76:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 9 09:56:56.768891 containerd[1507]: 2025-07-09 09:56:56.766 [INFO][4990] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.66-k8s-test--pod--1-eth0"
Jul 9 09:56:56.794802 containerd[1507]: time="2025-07-09T09:56:56.794089605Z" level=info msg="connecting to shim ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281" address="unix:///run/containerd/s/f27fe6f746dda51a2832b32b15aa2765eefcfe50bb8deb0e77c206567a03359c" namespace=k8s.io protocol=ttrpc version=3
Jul 9 09:56:56.818825 systemd[1]: Started cri-containerd-ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281.scope - libcontainer container ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281.
Jul 9 09:56:56.829648 kubelet[1840]: E0709 09:56:56.829606 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 9 09:56:56.829857 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 9 09:56:56.857983 containerd[1507]: time="2025-07-09T09:56:56.857933612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f3faff87-ad47-414f-b0a3-e530ac7a0ee6,Namespace:default,Attempt:0,} returns sandbox id \"ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281\""
Jul 9 09:56:56.859097 containerd[1507]: time="2025-07-09T09:56:56.859050575Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jul 9 09:56:57.524269 containerd[1507]: time="2025-07-09T09:56:57.524215756Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:57.525016 containerd[1507]: time="2025-07-09T09:56:57.524976598Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jul 9 09:56:57.527517 containerd[1507]: time="2025-07-09T09:56:57.527450045Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd\", size \"69964463\" in 668.37071ms"
Jul 9 09:56:57.527517 containerd[1507]: time="2025-07-09T09:56:57.527489006Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\""
Jul 9 09:56:57.530887 containerd[1507]: time="2025-07-09T09:56:57.530371974Z" level=info msg="CreateContainer within sandbox \"ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jul 9 09:56:57.540717 containerd[1507]: time="2025-07-09T09:56:57.540676646Z" level=info msg="Container bd83807f9ecea89af4427d6e5dc658876f6f0d210ed8446284a71c5e7d9ec250: CDI devices from CRI Config.CDIDevices: []"
Jul 9 09:56:57.548210 containerd[1507]: time="2025-07-09T09:56:57.548159109Z" level=info msg="CreateContainer within sandbox \"ee2860fa0747fdc799d69f0fd18921d7ca8762722ad9437d533d4aa192281281\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"bd83807f9ecea89af4427d6e5dc658876f6f0d210ed8446284a71c5e7d9ec250\""
Jul 9 09:56:57.548681 containerd[1507]: time="2025-07-09T09:56:57.548646190Z" level=info msg="StartContainer for \"bd83807f9ecea89af4427d6e5dc658876f6f0d210ed8446284a71c5e7d9ec250\""
Jul 9 09:56:57.549538 containerd[1507]: time="2025-07-09T09:56:57.549497433Z" level=info msg="connecting to shim bd83807f9ecea89af4427d6e5dc658876f6f0d210ed8446284a71c5e7d9ec250" address="unix:///run/containerd/s/f27fe6f746dda51a2832b32b15aa2765eefcfe50bb8deb0e77c206567a03359c" protocol=ttrpc version=3
Jul 9 09:56:57.580842 systemd[1]: Started cri-containerd-bd83807f9ecea89af4427d6e5dc658876f6f0d210ed8446284a71c5e7d9ec250.scope - libcontainer container bd83807f9ecea89af4427d6e5dc658876f6f0d210ed8446284a71c5e7d9ec250.
Jul 9 09:56:57.611174 containerd[1507]: time="2025-07-09T09:56:57.610899461Z" level=info msg="StartContainer for \"bd83807f9ecea89af4427d6e5dc658876f6f0d210ed8446284a71c5e7d9ec250\" returns successfully"
Jul 9 09:56:57.830747 kubelet[1840]: E0709 09:56:57.830623 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 9 09:56:58.027996 systemd-networkd[1424]: cali5ec59c6bf6e: Gained IPv6LL
Jul 9 09:56:58.226422 kubelet[1840]: I0709 09:56:58.226355 1840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=20.556489787 podStartE2EDuration="21.226336942s" podCreationTimestamp="2025-07-09 09:56:37 +0000 UTC" firstStartedPulling="2025-07-09 09:56:56.858688414 +0000 UTC m=+137.864389210" lastFinishedPulling="2025-07-09 09:56:57.528535569 +0000 UTC m=+138.534236365" observedRunningTime="2025-07-09 09:56:58.22582218 +0000 UTC m=+139.231522976" watchObservedRunningTime="2025-07-09 09:56:58.226336942 +0000 UTC m=+139.232037738"
Jul 9 09:56:58.831215 kubelet[1840]: E0709 09:56:58.831163 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 9 09:56:59.737680 kubelet[1840]: E0709 09:56:59.737631 1840 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 9 09:56:59.831837 kubelet[1840]: E0709 09:56:59.831789 1840 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"