Sep 4 15:46:05.767894 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 15:46:05.767917 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Sep 4 14:32:27 -00 2025
Sep 4 15:46:05.767926 kernel: KASLR enabled
Sep 4 15:46:05.767932 kernel: efi: EFI v2.7 by EDK II
Sep 4 15:46:05.767938 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 4 15:46:05.767944 kernel: random: crng init done
Sep 4 15:46:05.767951 kernel: secureboot: Secure boot disabled
Sep 4 15:46:05.767957 kernel: ACPI: Early table checksum verification disabled
Sep 4 15:46:05.767964 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 4 15:46:05.767970 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 15:46:05.767977 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.767983 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.767989 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.767995 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.768004 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.768010 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.768017 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.768023 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.768030 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 15:46:05.768036 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 4 15:46:05.768042 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 4 15:46:05.768049 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 15:46:05.768057 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 4 15:46:05.768063 kernel: Zone ranges:
Sep 4 15:46:05.768069 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 15:46:05.768076 kernel: DMA32 empty
Sep 4 15:46:05.768082 kernel: Normal empty
Sep 4 15:46:05.768088 kernel: Device empty
Sep 4 15:46:05.768094 kernel: Movable zone start for each node
Sep 4 15:46:05.768100 kernel: Early memory node ranges
Sep 4 15:46:05.768107 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 4 15:46:05.768113 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 4 15:46:05.768119 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 4 15:46:05.768126 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 4 15:46:05.768133 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 4 15:46:05.768140 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 4 15:46:05.768146 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 4 15:46:05.768152 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 4 15:46:05.768159 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 4 15:46:05.768172 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 4 15:46:05.768183 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 4 15:46:05.768190 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 4 15:46:05.768197 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 4 15:46:05.768203 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 15:46:05.768210 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 4 15:46:05.768217 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 4 15:46:05.768224 kernel: psci: probing for conduit method from ACPI.
Sep 4 15:46:05.768230 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 15:46:05.768238 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 15:46:05.768245 kernel: psci: Trusted OS migration not required
Sep 4 15:46:05.768252 kernel: psci: SMC Calling Convention v1.1
Sep 4 15:46:05.768259 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 4 15:46:05.768265 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 4 15:46:05.768272 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 4 15:46:05.768279 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 4 15:46:05.768286 kernel: Detected PIPT I-cache on CPU0
Sep 4 15:46:05.768292 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 15:46:05.768299 kernel: CPU features: detected: Spectre-v4
Sep 4 15:46:05.768347 kernel: CPU features: detected: Spectre-BHB
Sep 4 15:46:05.768356 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 15:46:05.768363 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 15:46:05.768370 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 15:46:05.768377 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 15:46:05.768384 kernel: alternatives: applying boot alternatives
Sep 4 15:46:05.768391 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa24154aac6dc1a5d38cdc5f4cdc1aea124b2960632298191d9d7d9a2320138a
Sep 4 15:46:05.768399 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 15:46:05.768412 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 15:46:05.768420 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 15:46:05.768427 kernel: Fallback order for Node 0: 0
Sep 4 15:46:05.768436 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 4 15:46:05.768443 kernel: Policy zone: DMA
Sep 4 15:46:05.768449 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 15:46:05.768456 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 4 15:46:05.768463 kernel: software IO TLB: area num 4.
Sep 4 15:46:05.768470 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 4 15:46:05.768476 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 4 15:46:05.768483 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 15:46:05.768490 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 15:46:05.768497 kernel: rcu: RCU event tracing is enabled.
Sep 4 15:46:05.768504 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 15:46:05.768513 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 15:46:05.768520 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 15:46:05.768527 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 15:46:05.768533 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 15:46:05.768540 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 15:46:05.768547 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 15:46:05.768554 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 15:46:05.768561 kernel: GICv3: 256 SPIs implemented
Sep 4 15:46:05.768568 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 15:46:05.768574 kernel: Root IRQ handler: gic_handle_irq
Sep 4 15:46:05.768581 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 15:46:05.768589 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 4 15:46:05.768596 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 4 15:46:05.768602 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 4 15:46:05.768609 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 15:46:05.768616 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 4 15:46:05.768623 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 4 15:46:05.768630 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 4 15:46:05.768637 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 15:46:05.768643 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 15:46:05.768650 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 15:46:05.768657 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 15:46:05.768666 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 15:46:05.768672 kernel: arm-pv: using stolen time PV
Sep 4 15:46:05.768680 kernel: Console: colour dummy device 80x25
Sep 4 15:46:05.768687 kernel: ACPI: Core revision 20240827
Sep 4 15:46:05.768694 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 15:46:05.768702 kernel: pid_max: default: 32768 minimum: 301
Sep 4 15:46:05.768709 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 15:46:05.768716 kernel: landlock: Up and running.
Sep 4 15:46:05.768724 kernel: SELinux: Initializing.
Sep 4 15:46:05.768731 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 15:46:05.768738 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 15:46:05.768746 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 15:46:05.768753 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 15:46:05.768760 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 15:46:05.768767 kernel: Remapping and enabling EFI services.
Sep 4 15:46:05.768776 kernel: smp: Bringing up secondary CPUs ...
Sep 4 15:46:05.768788 kernel: Detected PIPT I-cache on CPU1
Sep 4 15:46:05.768795 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 4 15:46:05.768804 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 4 15:46:05.768812 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 15:46:05.768819 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 15:46:05.768827 kernel: Detected PIPT I-cache on CPU2
Sep 4 15:46:05.768834 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 4 15:46:05.768844 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 4 15:46:05.768851 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 15:46:05.768858 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 4 15:46:05.768866 kernel: Detected PIPT I-cache on CPU3
Sep 4 15:46:05.768873 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 4 15:46:05.768881 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 4 15:46:05.768889 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 15:46:05.768897 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 4 15:46:05.768904 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 15:46:05.768911 kernel: SMP: Total of 4 processors activated.
Sep 4 15:46:05.768918 kernel: CPU: All CPU(s) started at EL1
Sep 4 15:46:05.768926 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 15:46:05.768933 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 15:46:05.768942 kernel: CPU features: detected: Common not Private translations
Sep 4 15:46:05.768949 kernel: CPU features: detected: CRC32 instructions
Sep 4 15:46:05.768957 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 4 15:46:05.768964 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 15:46:05.768971 kernel: CPU features: detected: LSE atomic instructions
Sep 4 15:46:05.768979 kernel: CPU features: detected: Privileged Access Never
Sep 4 15:46:05.768986 kernel: CPU features: detected: RAS Extension Support
Sep 4 15:46:05.768993 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 4 15:46:05.769003 kernel: alternatives: applying system-wide alternatives
Sep 4 15:46:05.769010 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 4 15:46:05.769018 kernel: Memory: 2424352K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 39104K init, 1038K bss, 125600K reserved, 16384K cma-reserved)
Sep 4 15:46:05.769026 kernel: devtmpfs: initialized
Sep 4 15:46:05.769033 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 15:46:05.769041 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 15:46:05.769048 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 15:46:05.769057 kernel: 0 pages in range for non-PLT usage
Sep 4 15:46:05.769064 kernel: 508528 pages in range for PLT usage
Sep 4 15:46:05.769071 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 15:46:05.769079 kernel: SMBIOS 3.0.0 present.
Sep 4 15:46:05.769086 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 4 15:46:05.769094 kernel: DMI: Memory slots populated: 1/1
Sep 4 15:46:05.769101 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 15:46:05.769110 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 15:46:05.769117 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 15:46:05.769125 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 15:46:05.769132 kernel: audit: initializing netlink subsys (disabled)
Sep 4 15:46:05.769140 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 4 15:46:05.769147 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 15:46:05.769155 kernel: cpuidle: using governor menu
Sep 4 15:46:05.769162 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 15:46:05.769171 kernel: ASID allocator initialised with 32768 entries
Sep 4 15:46:05.769178 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 15:46:05.769185 kernel: Serial: AMBA PL011 UART driver
Sep 4 15:46:05.769193 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 15:46:05.769200 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 15:46:05.769208 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 15:46:05.769215 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 15:46:05.769224 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 15:46:05.769232 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 15:46:05.769240 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 15:46:05.769379 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 15:46:05.769388 kernel: ACPI: Added _OSI(Module Device)
Sep 4 15:46:05.769395 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 15:46:05.769403 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 15:46:05.769424 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 15:46:05.769432 kernel: ACPI: Interpreter enabled
Sep 4 15:46:05.769440 kernel: ACPI: Using GIC for interrupt routing
Sep 4 15:46:05.769447 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 15:46:05.769455 kernel: ACPI: CPU0 has been hot-added
Sep 4 15:46:05.769462 kernel: ACPI: CPU1 has been hot-added
Sep 4 15:46:05.769469 kernel: ACPI: CPU2 has been hot-added
Sep 4 15:46:05.769477 kernel: ACPI: CPU3 has been hot-added
Sep 4 15:46:05.769486 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 15:46:05.769494 kernel: printk: legacy console [ttyAMA0] enabled
Sep 4 15:46:05.769501 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 15:46:05.769671 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 15:46:05.769755 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 15:46:05.769833 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 15:46:05.769912 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 4 15:46:05.769988 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 4 15:46:05.769997 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 4 15:46:05.770005 kernel: PCI host bridge to bus 0000:00
Sep 4 15:46:05.770088 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 4 15:46:05.770158 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 15:46:05.770229 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 4 15:46:05.770297 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 15:46:05.770419 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 4 15:46:05.770521 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 15:46:05.770601 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 4 15:46:05.770682 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 4 15:46:05.770762 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 4 15:46:05.770838 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 4 15:46:05.770917 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 4 15:46:05.770994 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 4 15:46:05.771066 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 4 15:46:05.771137 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 15:46:05.771206 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 4 15:46:05.771215 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 15:46:05.771223 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 15:46:05.771231 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 15:46:05.771238 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 15:46:05.771248 kernel: iommu: Default domain type: Translated
Sep 4 15:46:05.771255 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 15:46:05.771263 kernel: efivars: Registered efivars operations
Sep 4 15:46:05.771270 kernel: vgaarb: loaded
Sep 4 15:46:05.771278 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 15:46:05.771285 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 15:46:05.771293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 15:46:05.771312 kernel: pnp: PnP ACPI init
Sep 4 15:46:05.771402 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 4 15:46:05.771419 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 15:46:05.771427 kernel: NET: Registered PF_INET protocol family
Sep 4 15:46:05.771435 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 15:46:05.771443 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 15:46:05.771451 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 15:46:05.771461 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 15:46:05.771469 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 15:46:05.771477 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 15:46:05.771485 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 15:46:05.771492 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 15:46:05.771500 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 15:46:05.771507 kernel: PCI: CLS 0 bytes, default 64
Sep 4 15:46:05.771516 kernel: kvm [1]: HYP mode not available
Sep 4 15:46:05.771524 kernel: Initialise system trusted keyrings
Sep 4 15:46:05.771531 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 15:46:05.771539 kernel: Key type asymmetric registered
Sep 4 15:46:05.771546 kernel: Asymmetric key parser 'x509' registered
Sep 4 15:46:05.771554 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 4 15:46:05.771562 kernel: io scheduler mq-deadline registered
Sep 4 15:46:05.771570 kernel: io scheduler kyber registered
Sep 4 15:46:05.771578 kernel: io scheduler bfq registered
Sep 4 15:46:05.771586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 15:46:05.771593 kernel: ACPI: button: Power Button [PWRB]
Sep 4 15:46:05.771601 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 15:46:05.771686 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 4 15:46:05.771697 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 15:46:05.771706 kernel: thunder_xcv, ver 1.0
Sep 4 15:46:05.771714 kernel: thunder_bgx, ver 1.0
Sep 4 15:46:05.771721 kernel: nicpf, ver 1.0
Sep 4 15:46:05.771729 kernel: nicvf, ver 1.0
Sep 4 15:46:05.771823 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 15:46:05.771909 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T15:46:05 UTC (1757000765)
Sep 4 15:46:05.771920 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 15:46:05.771930 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 4 15:46:05.771940 kernel: watchdog: NMI not fully supported
Sep 4 15:46:05.771948 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 15:46:05.771956 kernel: NET: Registered PF_INET6 protocol family
Sep 4 15:46:05.771966 kernel: Segment Routing with IPv6
Sep 4 15:46:05.771976 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 15:46:05.771985 kernel: NET: Registered PF_PACKET protocol family
Sep 4 15:46:05.771994 kernel: Key type dns_resolver registered
Sep 4 15:46:05.772002 kernel: registered taskstats version 1
Sep 4 15:46:05.772009 kernel: Loading compiled-in X.509 certificates
Sep 4 15:46:05.772017 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 5cbaeb2a956cf8364fe17c89324cc000891c1e4c'
Sep 4 15:46:05.772025 kernel: Demotion targets for Node 0: null
Sep 4 15:46:05.772032 kernel: Key type .fscrypt registered
Sep 4 15:46:05.772042 kernel: Key type fscrypt-provisioning registered
Sep 4 15:46:05.772051 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 15:46:05.772059 kernel: ima: Allocated hash algorithm: sha1
Sep 4 15:46:05.772067 kernel: ima: No architecture policies found
Sep 4 15:46:05.772075 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 15:46:05.772084 kernel: clk: Disabling unused clocks
Sep 4 15:46:05.772092 kernel: PM: genpd: Disabling unused power domains
Sep 4 15:46:05.772100 kernel: Warning: unable to open an initial console.
Sep 4 15:46:05.772113 kernel: Freeing unused kernel memory: 39104K
Sep 4 15:46:05.772123 kernel: Run /init as init process
Sep 4 15:46:05.772131 kernel: with arguments:
Sep 4 15:46:05.772138 kernel: /init
Sep 4 15:46:05.772146 kernel: with environment:
Sep 4 15:46:05.772153 kernel: HOME=/
Sep 4 15:46:05.772160 kernel: TERM=linux
Sep 4 15:46:05.772169 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 15:46:05.772178 systemd[1]: Successfully made /usr/ read-only.
Sep 4 15:46:05.772188 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 15:46:05.772197 systemd[1]: Detected virtualization kvm.
Sep 4 15:46:05.772204 systemd[1]: Detected architecture arm64.
Sep 4 15:46:05.772212 systemd[1]: Running in initrd.
Sep 4 15:46:05.772221 systemd[1]: No hostname configured, using default hostname.
Sep 4 15:46:05.772230 systemd[1]: Hostname set to .
Sep 4 15:46:05.772238 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Sep 4 15:46:05.772246 systemd[1]: Queued start job for default target initrd.target.
Sep 4 15:46:05.772254 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 15:46:05.772262 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 15:46:05.772271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 15:46:05.772281 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 15:46:05.772290 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 15:46:05.772298 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 15:46:05.772321 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 15:46:05.772330 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 15:46:05.772340 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 15:46:05.772348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 15:46:05.772356 systemd[1]: Reached target paths.target - Path Units.
Sep 4 15:46:05.772364 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 15:46:05.772372 systemd[1]: Reached target swap.target - Swaps.
Sep 4 15:46:05.772380 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 15:46:05.772388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 15:46:05.772398 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 15:46:05.772411 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 15:46:05.772421 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 15:46:05.772433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 15:46:05.772441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 15:46:05.772449 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 15:46:05.772459 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 15:46:05.772467 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 15:46:05.772475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 15:46:05.772483 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 15:46:05.772492 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 15:46:05.772500 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 15:46:05.772508 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 15:46:05.772517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 15:46:05.772526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 15:46:05.772534 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 15:46:05.772542 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 15:46:05.772552 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 15:46:05.772560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 15:46:05.772588 systemd-journald[244]: Collecting audit messages is disabled.
Sep 4 15:46:05.772610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 15:46:05.772619 systemd-journald[244]: Journal started
Sep 4 15:46:05.772637 systemd-journald[244]: Runtime Journal (/run/log/journal/7e73dc5ca35b4b389efd55ae6c21d1b8) is 6M, max 48.5M, 42.4M free.
Sep 4 15:46:05.776395 kernel: Bridge firewalling registered
Sep 4 15:46:05.761916 systemd-modules-load[245]: Inserted module 'overlay'
Sep 4 15:46:05.777949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 15:46:05.775428 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 4 15:46:05.781329 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 15:46:05.783430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 15:46:05.785141 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 15:46:05.788787 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 15:46:05.790329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 15:46:05.791915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 15:46:05.806952 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 15:46:05.816399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 15:46:05.816531 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 15:46:05.817973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 15:46:05.819258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 15:46:05.822138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 15:46:05.825359 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 15:46:05.827292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 15:46:05.847349 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa24154aac6dc1a5d38cdc5f4cdc1aea124b2960632298191d9d7d9a2320138a
Sep 4 15:46:05.860787 systemd-resolved[288]: Positive Trust Anchors:
Sep 4 15:46:05.860804 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 15:46:05.860807 systemd-resolved[288]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Sep 4 15:46:05.860837 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 15:46:05.865615 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 4 15:46:05.866432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 15:46:05.869652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 15:46:05.915344 kernel: SCSI subsystem initialized
Sep 4 15:46:05.920322 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 15:46:05.927344 kernel: iscsi: registered transport (tcp)
Sep 4 15:46:05.945337 kernel: iscsi: registered transport (qla4xxx)
Sep 4 15:46:05.945378 kernel: QLogic iSCSI HBA Driver
Sep 4 15:46:05.961914 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 15:46:05.978531 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 15:46:05.981701 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 15:46:06.022201 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 15:46:06.024026 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 15:46:06.087353 kernel: raid6: neonx8 gen() 15739 MB/s
Sep 4 15:46:06.104335 kernel: raid6: neonx4 gen() 15729 MB/s
Sep 4 15:46:06.122618 kernel: raid6: neonx2 gen() 13139 MB/s
Sep 4 15:46:06.138345 kernel: raid6: neonx1 gen() 10403 MB/s
Sep 4 15:46:06.155341 kernel: raid6: int64x8 gen() 6893 MB/s
Sep 4 15:46:06.172335 kernel: raid6: int64x4 gen() 7330 MB/s
Sep 4 15:46:06.189341 kernel: raid6: int64x2 gen() 6067 MB/s
Sep 4 15:46:06.207484 kernel: raid6: int64x1 gen() 5036 MB/s
Sep 4 15:46:06.207528 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s
Sep 4 15:46:06.223345 kernel: raid6: .... xor() 11986 MB/s, rmw enabled
Sep 4 15:46:06.223394 kernel: raid6: using neon recovery algorithm
Sep 4 15:46:06.228356 kernel: xor: measuring software checksum speed
Sep 4 15:46:06.228385 kernel: 8regs : 21573 MB/sec
Sep 4 15:46:06.229375 kernel: 32regs : 21145 MB/sec
Sep 4 15:46:06.229388 kernel: arm64_neon : 28109 MB/sec
Sep 4 15:46:06.229398 kernel: xor: using function: arm64_neon (28109 MB/sec)
Sep 4 15:46:06.281340 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 15:46:06.287269 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 15:46:06.289471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 15:46:06.326630 systemd-udevd[499]: Using default interface naming scheme 'v257'.
Sep 4 15:46:06.331903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 15:46:06.333715 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 15:46:06.359181 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 4 15:46:06.380732 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 15:46:06.384527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 15:46:06.430952 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 15:46:06.433393 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 15:46:06.484343 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 4 15:46:06.488475 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 15:46:06.494608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 15:46:06.502450 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 15:46:06.502475 kernel: GPT:9289727 != 19775487
Sep 4 15:46:06.502486 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 15:46:06.494723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 15:46:06.506631 kernel: GPT:9289727 != 19775487
Sep 4 15:46:06.506650 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 15:46:06.506661 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 15:46:06.500497 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 15:46:06.506071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 15:46:06.529368 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 15:46:06.531932 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 15:46:06.533925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 15:46:06.541478 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 15:46:06.543385 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 15:46:06.561612 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 15:46:06.568688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 15:46:06.569712 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 15:46:06.571341 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 15:46:06.573033 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 15:46:06.575354 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 15:46:06.576928 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 15:46:06.591827 disk-uuid[592]: Primary Header is updated.
Sep 4 15:46:06.591827 disk-uuid[592]: Secondary Entries is updated.
Sep 4 15:46:06.591827 disk-uuid[592]: Secondary Header is updated.
Sep 4 15:46:06.595319 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 15:46:06.595551 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 15:46:07.604357 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 15:46:07.605334 disk-uuid[595]: The operation has completed successfully.
Sep 4 15:46:07.632175 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 15:46:07.633098 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 15:46:07.670579 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 15:46:07.693331 sh[611]: Success
Sep 4 15:46:07.705804 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 15:46:07.705841 kernel: device-mapper: uevent: version 1.0.3
Sep 4 15:46:07.706756 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 15:46:07.713346 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 4 15:46:07.736131 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 15:46:07.739892 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 15:46:07.755615 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 15:46:07.760644 kernel: BTRFS: device fsid d6826f11-765e-43ab-9425-5cf9fd7ef603 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (623)
Sep 4 15:46:07.760680 kernel: BTRFS info (device dm-0): first mount of filesystem d6826f11-765e-43ab-9425-5cf9fd7ef603
Sep 4 15:46:07.760691 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 15:46:07.765320 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 15:46:07.765356 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 15:46:07.766235 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 15:46:07.767415 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 15:46:07.768408 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 15:46:07.769179 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 15:46:07.771930 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 15:46:07.797562 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 4 15:46:07.799496 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:46:07.799519 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 15:46:07.801884 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 15:46:07.801919 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 15:46:07.806316 kernel: BTRFS info (device vda6): last unmount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:46:07.806431 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 15:46:07.808588 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 15:46:07.877375 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 15:46:07.881058 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 15:46:07.915422 systemd-networkd[800]: lo: Link UP
Sep 4 15:46:07.916101 systemd-networkd[800]: lo: Gained carrier
Sep 4 15:46:07.916890 systemd-networkd[800]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 15:46:07.918379 ignition[697]: Ignition 2.22.0
Sep 4 15:46:07.916893 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 15:46:07.918385 ignition[697]: Stage: fetch-offline
Sep 4 15:46:07.917072 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 15:46:07.918425 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Sep 4 15:46:07.917738 systemd-networkd[800]: eth0: Link UP
Sep 4 15:46:07.918432 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:46:07.918330 systemd[1]: Reached target network.target - Network.
Sep 4 15:46:07.918521 ignition[697]: parsed url from cmdline: ""
Sep 4 15:46:07.919685 systemd-networkd[800]: eth0: Gained carrier
Sep 4 15:46:07.918524 ignition[697]: no config URL provided
Sep 4 15:46:07.919695 systemd-networkd[800]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 15:46:07.918529 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 15:46:07.918535 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Sep 4 15:46:07.918553 ignition[697]: op(1): [started] loading QEMU firmware config module
Sep 4 15:46:07.918557 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 15:46:07.923772 ignition[697]: op(1): [finished] loading QEMU firmware config module
Sep 4 15:46:07.938062 ignition[697]: parsing config with SHA512: ad5ea5f98bf0d1922958a239af5dbbe6c167ae62cde8ae41d20b4e6d1647fef24c5cf199ae35a03216f22259ec08c1d145ff03f66e68a2721e99fe187d9da27d
Sep 4 15:46:07.942389 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 15:46:07.943284 ignition[697]: fetch-offline: fetch-offline passed
Sep 4 15:46:07.943003 unknown[697]: fetched base config from "system"
Sep 4 15:46:07.943365 ignition[697]: Ignition finished successfully
Sep 4 15:46:07.943010 unknown[697]: fetched user config from "qemu"
Sep 4 15:46:07.946379 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 15:46:07.947385 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 15:46:07.948242 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 15:46:07.973602 ignition[812]: Ignition 2.22.0
Sep 4 15:46:07.973618 ignition[812]: Stage: kargs
Sep 4 15:46:07.973753 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Sep 4 15:46:07.973762 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:46:07.974358 ignition[812]: kargs: kargs passed
Sep 4 15:46:07.977943 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 15:46:07.974397 ignition[812]: Ignition finished successfully
Sep 4 15:46:07.979822 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 15:46:08.004654 ignition[820]: Ignition 2.22.0
Sep 4 15:46:08.004671 ignition[820]: Stage: disks
Sep 4 15:46:08.004801 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Sep 4 15:46:08.004810 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:46:08.007363 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 15:46:08.005368 ignition[820]: disks: disks passed
Sep 4 15:46:08.008810 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 15:46:08.005422 ignition[820]: Ignition finished successfully
Sep 4 15:46:08.010141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 15:46:08.011398 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 15:46:08.012805 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 15:46:08.014100 systemd[1]: Reached target basic.target - Basic System.
Sep 4 15:46:08.016481 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 15:46:08.050077 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 15:46:08.119199 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 15:46:08.127186 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 15:46:08.198327 kernel: EXT4-fs (vda9): mounted filesystem 1afcf1f8-650a-49cc-971e-a57f02cf6533 r/w with ordered data mode. Quota mode: none.
Sep 4 15:46:08.200694 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 15:46:08.203802 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 15:46:08.209574 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 15:46:08.228776 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 15:46:08.229779 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 15:46:08.229818 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 15:46:08.239068 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Sep 4 15:46:08.239090 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:46:08.239100 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 15:46:08.229989 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 15:46:08.241058 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 15:46:08.242188 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 15:46:08.242205 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 15:46:08.245041 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 15:46:08.248112 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 15:46:08.304155 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 15:46:08.307752 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Sep 4 15:46:08.311033 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 15:46:08.315644 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 15:46:08.388272 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 15:46:08.390166 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 15:46:08.391659 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 15:46:08.403358 kernel: BTRFS info (device vda6): last unmount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:46:08.419455 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 15:46:08.435780 ignition[953]: INFO : Ignition 2.22.0
Sep 4 15:46:08.435780 ignition[953]: INFO : Stage: mount
Sep 4 15:46:08.437071 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 15:46:08.437071 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:46:08.437071 ignition[953]: INFO : mount: mount passed
Sep 4 15:46:08.437071 ignition[953]: INFO : Ignition finished successfully
Sep 4 15:46:08.438236 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 15:46:08.440238 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 15:46:08.759483 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 15:46:08.760977 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 15:46:08.787468 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Sep 4 15:46:08.787501 kernel: BTRFS info (device vda6): first mount of filesystem 7ad7f3a7-2940-40b1-9356-75c56294c96d
Sep 4 15:46:08.787513 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 15:46:08.790374 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 15:46:08.790391 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 15:46:08.791860 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 15:46:08.830123 ignition[983]: INFO : Ignition 2.22.0
Sep 4 15:46:08.830123 ignition[983]: INFO : Stage: files
Sep 4 15:46:08.831492 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 15:46:08.831492 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:46:08.831492 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 15:46:08.834228 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 15:46:08.834228 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 15:46:08.834228 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 15:46:08.834228 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 15:46:08.834228 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 15:46:08.833879 unknown[983]: wrote ssh authorized keys file for user: core
Sep 4 15:46:08.840765 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 15:46:08.840765 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 15:46:08.840765 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 15:46:08.840765 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 15:46:08.840765 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 15:46:08.840765 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 15:46:08.840765 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 15:46:08.840765 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 4 15:46:09.436139 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Sep 4 15:46:09.658851 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 15:46:09.658851 ignition[983]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Sep 4 15:46:09.662110 ignition[983]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 15:46:09.662110 ignition[983]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 15:46:09.662110 ignition[983]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Sep 4 15:46:09.662110 ignition[983]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 15:46:09.669486 systemd-networkd[800]: eth0: Gained IPv6LL
Sep 4 15:46:09.676169 ignition[983]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 15:46:09.679702 ignition[983]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 15:46:09.682034 ignition[983]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 15:46:09.682034 ignition[983]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 15:46:09.682034 ignition[983]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 15:46:09.682034 ignition[983]: INFO : files: files passed
Sep 4 15:46:09.682034 ignition[983]: INFO : Ignition finished successfully
Sep 4 15:46:09.683358 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 15:46:09.686195 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 15:46:09.687931 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 15:46:09.708214 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 15:46:09.708322 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 15:46:09.710959 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 15:46:09.712339 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 15:46:09.712339 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 15:46:09.714700 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 15:46:09.714202 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 15:46:09.716054 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 15:46:09.717881 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 15:46:09.751153 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 15:46:09.751269 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 15:46:09.754624 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 15:46:09.756006 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 15:46:09.757482 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 15:46:09.758265 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 15:46:09.793673 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 15:46:09.796031 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 15:46:09.827126 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 15:46:09.828188 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 15:46:09.829838 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 15:46:09.831159 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 15:46:09.831272 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 15:46:09.833210 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 15:46:09.834951 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 15:46:09.836193 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 15:46:09.837581 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 15:46:09.839143 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 15:46:09.840706 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 15:46:09.842273 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 15:46:09.843788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 15:46:09.845241 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 15:46:09.847086 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 15:46:09.848427 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 15:46:09.849595 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 15:46:09.849702 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 15:46:09.851556 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 15:46:09.853090 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 15:46:09.854636 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 15:46:09.854735 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 15:46:09.856280 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 15:46:09.856417 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 15:46:09.858885 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 15:46:09.859002 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 15:46:09.860538 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 15:46:09.861841 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 15:46:09.861949 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 15:46:09.863481 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 15:46:09.864904 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 15:46:09.866119 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 15:46:09.866205 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 15:46:09.867625 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 15:46:09.867699 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 15:46:09.869336 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 15:46:09.869467 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 15:46:09.870912 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 15:46:09.871009 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 15:46:09.873047 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 15:46:09.874777 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 15:46:09.876069 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 15:46:09.876179 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 15:46:09.877892 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 15:46:09.877987 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 15:46:09.879565 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 15:46:09.879665 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 15:46:09.884360 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 15:46:09.885448 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 15:46:09.893108 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 15:46:09.901295 ignition[1038]: INFO : Ignition 2.22.0
Sep 4 15:46:09.901295 ignition[1038]: INFO : Stage: umount
Sep 4 15:46:09.903039 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 15:46:09.903039 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 15:46:09.903039 ignition[1038]: INFO : umount: umount passed
Sep 4 15:46:09.903039 ignition[1038]: INFO : Ignition finished successfully
Sep 4 15:46:09.905902 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 15:46:09.905996 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 15:46:09.907425 systemd[1]: Stopped target network.target - Network.
Sep 4 15:46:09.908552 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 15:46:09.908604 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 15:46:09.910046 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 15:46:09.910084 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 15:46:09.911250 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 15:46:09.911291 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 15:46:09.912708 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 15:46:09.912747 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 15:46:09.914268 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 15:46:09.915772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 15:46:09.924212 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 15:46:09.924386 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 15:46:09.927645 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 15:46:09.927743 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 15:46:09.931480 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 15:46:09.933196 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 15:46:09.933242 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 15:46:09.935693 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 15:46:09.936557 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 15:46:09.936605 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 15:46:09.938380 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 15:46:09.938429 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 15:46:09.939954 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 15:46:09.939993 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 15:46:09.943042 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 15:46:09.948441 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 15:46:09.948534 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 15:46:09.950848 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 15:46:09.950947 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 15:46:09.955191 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 15:46:09.964432 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 15:46:09.965701 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 15:46:09.965779 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 15:46:09.967609 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 15:46:09.967665 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 15:46:09.968635 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 15:46:09.968663 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 15:46:09.970036 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 15:46:09.970075 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 15:46:09.972197 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 15:46:09.972245 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 15:46:09.974536 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 15:46:09.974582 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 15:46:09.977575 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 15:46:09.979132 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 15:46:09.979180 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 15:46:09.980794 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 15:46:09.980834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 15:46:09.982533 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 15:46:09.982570 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 15:46:09.984237 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 15:46:09.984273 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 15:46:09.985905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 15:46:09.985942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 15:46:09.998030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 15:46:09.998157 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 15:46:10.000766 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 15:46:10.002809 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 15:46:10.023976 systemd[1]: Switching root. Sep 4 15:46:10.045261 systemd-journald[244]: Journal stopped Sep 4 15:46:10.754112 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Sep 4 15:46:10.754161 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 15:46:10.754179 kernel: SELinux: policy capability open_perms=1 Sep 4 15:46:10.754191 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 15:46:10.754202 kernel: SELinux: policy capability always_check_network=0 Sep 4 15:46:10.754215 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 15:46:10.754226 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 15:46:10.754236 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 15:46:10.754246 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 15:46:10.754256 kernel: SELinux: policy capability userspace_initial_context=0 Sep 4 15:46:10.754265 kernel: audit: type=1403 audit(1757000770.194:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 15:46:10.754282 systemd[1]: Successfully loaded SELinux policy in 43.504ms. Sep 4 15:46:10.754318 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.054ms. 
Sep 4 15:46:10.754331 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 15:46:10.754342 systemd[1]: Detected virtualization kvm. Sep 4 15:46:10.754355 systemd[1]: Detected architecture arm64. Sep 4 15:46:10.754365 systemd[1]: Detected first boot. Sep 4 15:46:10.754377 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Sep 4 15:46:10.754387 zram_generator::config[1084]: No configuration found. Sep 4 15:46:10.754407 kernel: NET: Registered PF_VSOCK protocol family Sep 4 15:46:10.754420 systemd[1]: Populated /etc with preset unit settings. Sep 4 15:46:10.754431 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 15:46:10.754443 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 15:46:10.754454 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 15:46:10.754465 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 15:46:10.754477 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 15:46:10.754487 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 15:46:10.754497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 15:46:10.754508 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 15:46:10.754522 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 15:46:10.754533 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 15:46:10.754543 systemd[1]: Created slice user.slice - User and Session Slice. 
Sep 4 15:46:10.754555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 15:46:10.754566 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 15:46:10.754579 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 15:46:10.754590 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 15:46:10.754602 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 15:46:10.754613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 15:46:10.754624 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 15:46:10.754634 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 15:46:10.754645 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 15:46:10.754656 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 15:46:10.754667 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 15:46:10.754678 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 15:46:10.754691 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 15:46:10.754702 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 15:46:10.754713 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 15:46:10.754724 systemd[1]: Reached target slices.target - Slice Units. Sep 4 15:46:10.754735 systemd[1]: Reached target swap.target - Swaps. Sep 4 15:46:10.754747 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 15:46:10.754758 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Sep 4 15:46:10.754769 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 15:46:10.754780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 15:46:10.754791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 15:46:10.754802 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 15:46:10.754813 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 15:46:10.754823 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 15:46:10.754836 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 15:46:10.754847 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 15:46:10.754858 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 15:46:10.754868 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 15:46:10.754879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 15:46:10.754890 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 15:46:10.755012 systemd[1]: Reached target machines.target - Containers. Sep 4 15:46:10.755026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 15:46:10.755038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 15:46:10.755050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 15:46:10.755061 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 15:46:10.755072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 15:46:10.755082 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 4 15:46:10.755095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 15:46:10.755106 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 15:46:10.755117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 15:46:10.755128 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 15:46:10.755138 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 15:46:10.755149 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 15:46:10.755159 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 15:46:10.755171 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 15:46:10.755182 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 15:46:10.755192 kernel: fuse: init (API version 7.41) Sep 4 15:46:10.755204 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 15:46:10.755215 kernel: ACPI: bus type drm_connector registered Sep 4 15:46:10.755225 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 15:46:10.755236 kernel: loop: module loaded Sep 4 15:46:10.755247 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 15:46:10.755259 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 15:46:10.755270 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 15:46:10.755281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 15:46:10.755336 systemd-journald[1159]: Collecting audit messages is disabled. 
Sep 4 15:46:10.755361 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 15:46:10.755375 systemd[1]: Stopped verity-setup.service. Sep 4 15:46:10.755387 systemd-journald[1159]: Journal started Sep 4 15:46:10.755413 systemd-journald[1159]: Runtime Journal (/run/log/journal/7e73dc5ca35b4b389efd55ae6c21d1b8) is 6M, max 48.5M, 42.4M free. Sep 4 15:46:10.554005 systemd[1]: Queued start job for default target multi-user.target. Sep 4 15:46:10.580406 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 15:46:10.580844 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 15:46:10.758547 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 15:46:10.759523 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 15:46:10.760434 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 15:46:10.761454 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 15:46:10.762263 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 15:46:10.763296 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 15:46:10.764222 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 15:46:10.766415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 15:46:10.767687 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 15:46:10.768897 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 15:46:10.769068 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 15:46:10.770268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 15:46:10.770494 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 15:46:10.771564 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 15:46:10.771718 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 4 15:46:10.772879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 15:46:10.773063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 15:46:10.774320 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 15:46:10.774527 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 15:46:10.775585 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 15:46:10.775735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 15:46:10.776985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 15:46:10.779394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 15:46:10.782366 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 15:46:10.784449 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 15:46:10.796644 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 15:46:10.797974 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Sep 4 15:46:10.800156 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 15:46:10.802085 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 15:46:10.803058 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 15:46:10.803095 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 15:46:10.804800 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 15:46:10.805963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 4 15:46:10.815091 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 15:46:10.816942 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 15:46:10.818010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 15:46:10.819083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 15:46:10.820100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 15:46:10.822558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 15:46:10.824527 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 15:46:10.826271 systemd-journald[1159]: Time spent on flushing to /var/log/journal/7e73dc5ca35b4b389efd55ae6c21d1b8 is 15.194ms for 865 entries. Sep 4 15:46:10.826271 systemd-journald[1159]: System Journal (/var/log/journal/7e73dc5ca35b4b389efd55ae6c21d1b8) is 8M, max 195.6M, 187.6M free. Sep 4 15:46:10.856997 systemd-journald[1159]: Received client request to flush runtime journal. Sep 4 15:46:10.857046 kernel: loop0: detected capacity change from 0 to 207008 Sep 4 15:46:10.857067 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 15:46:10.827527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 15:46:10.829977 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 15:46:10.831196 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 15:46:10.832464 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 15:46:10.834197 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Sep 4 15:46:10.837689 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 15:46:10.841520 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 15:46:10.853118 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 4 15:46:10.853128 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 4 15:46:10.855636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 15:46:10.859785 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 15:46:10.861347 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 15:46:10.867471 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 15:46:10.874325 kernel: loop1: detected capacity change from 0 to 100608 Sep 4 15:46:10.876535 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 15:46:10.894047 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 15:46:10.896669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 15:46:10.898299 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 15:46:10.902328 kernel: loop2: detected capacity change from 0 to 119320 Sep 4 15:46:10.920589 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 15:46:10.927990 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Sep 4 15:46:10.928016 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Sep 4 15:46:10.931341 kernel: loop3: detected capacity change from 0 to 207008 Sep 4 15:46:10.935848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 15:46:10.940336 kernel: loop4: detected capacity change from 0 to 100608 Sep 4 15:46:10.946333 kernel: loop5: detected capacity change from 0 to 119320 Sep 4 15:46:10.949231 (sd-merge)[1227]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Sep 4 15:46:10.952104 (sd-merge)[1227]: Merged extensions into '/usr'. Sep 4 15:46:10.955510 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 15:46:10.955621 systemd[1]: Reloading... Sep 4 15:46:11.007537 zram_generator::config[1257]: No configuration found. Sep 4 15:46:11.024914 systemd-resolved[1222]: Positive Trust Anchors: Sep 4 15:46:11.025200 systemd-resolved[1222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 15:46:11.025255 systemd-resolved[1222]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Sep 4 15:46:11.025342 systemd-resolved[1222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 15:46:11.031714 systemd-resolved[1222]: Defaulting to hostname 'linux'. Sep 4 15:46:11.152955 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 15:46:11.153542 systemd[1]: Reloading finished in 197 ms. Sep 4 15:46:11.174075 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 15:46:11.175242 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 4 15:46:11.177321 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 15:46:11.180077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 15:46:11.189448 systemd[1]: Starting ensure-sysext.service... Sep 4 15:46:11.191110 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 15:46:11.200047 systemd[1]: Reload requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)... Sep 4 15:46:11.200065 systemd[1]: Reloading... Sep 4 15:46:11.207122 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 15:46:11.207437 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 15:46:11.207710 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 15:46:11.207908 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 15:46:11.209538 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 15:46:11.209743 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Sep 4 15:46:11.209789 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Sep 4 15:46:11.227460 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 15:46:11.227471 systemd-tmpfiles[1295]: Skipping /boot Sep 4 15:46:11.235536 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 15:46:11.235551 systemd-tmpfiles[1295]: Skipping /boot Sep 4 15:46:11.254538 zram_generator::config[1325]: No configuration found. Sep 4 15:46:11.383346 systemd[1]: Reloading finished in 182 ms. 
Sep 4 15:46:11.399072 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 15:46:11.410918 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 15:46:11.419492 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 15:46:11.421598 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 15:46:11.440897 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 15:46:11.446662 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 15:46:11.449242 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 15:46:11.451941 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 15:46:11.455755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 15:46:11.458731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 15:46:11.460561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 15:46:11.462606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 15:46:11.463657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 15:46:11.463766 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 15:46:11.468894 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 15:46:11.471118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 4 15:46:11.471372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 15:46:11.471472 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 15:46:11.478289 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 15:46:11.481523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 15:46:11.482555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 15:46:11.482630 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 15:46:11.484441 systemd-udevd[1371]: Using default interface naming scheme 'v257'. Sep 4 15:46:11.486891 systemd[1]: Finished ensure-sysext.service. Sep 4 15:46:11.488706 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 15:46:11.490365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 15:46:11.490582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 15:46:11.492107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 15:46:11.492253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 15:46:11.493783 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 15:46:11.494187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 15:46:11.495926 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 4 15:46:11.496117 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 15:46:11.502270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 15:46:11.502381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 15:46:11.504286 augenrules[1395]: No rules Sep 4 15:46:11.505507 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 15:46:11.506757 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 15:46:11.506957 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 15:46:11.510728 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 15:46:11.520358 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 15:46:11.540271 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 15:46:11.541942 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 15:46:11.570504 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 4 15:46:11.642611 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 15:46:11.645734 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 15:46:11.669422 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 4 15:46:11.694290 systemd-networkd[1427]: lo: Link UP Sep 4 15:46:11.694299 systemd-networkd[1427]: lo: Gained carrier Sep 4 15:46:11.695150 systemd-networkd[1427]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Sep 4 15:46:11.695159 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 15:46:11.695210 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 15:46:11.696413 systemd-networkd[1427]: eth0: Link UP Sep 4 15:46:11.696415 systemd[1]: Reached target network.target - Network. Sep 4 15:46:11.696548 systemd-networkd[1427]: eth0: Gained carrier Sep 4 15:46:11.696567 systemd-networkd[1427]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Sep 4 15:46:11.699201 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 15:46:11.701462 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 15:46:11.709382 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 15:46:11.716879 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 15:46:11.718463 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 15:46:11.718699 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 15:46:11.718843 systemd-timesyncd[1402]: Initial clock synchronization to Thu 2025-09-04 15:46:11.680558 UTC. Sep 4 15:46:11.720379 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 15:46:11.759337 ldconfig[1363]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 4 15:46:11.767373 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 15:46:11.770522 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 15:46:11.784358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 15:46:11.790409 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 15:46:11.820341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 15:46:11.822519 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 15:46:11.823413 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 15:46:11.824324 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 15:46:11.825409 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 15:46:11.826260 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 15:46:11.827309 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 15:46:11.828192 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 15:46:11.828222 systemd[1]: Reached target paths.target - Path Units. Sep 4 15:46:11.829135 systemd[1]: Reached target timers.target - Timer Units. Sep 4 15:46:11.830590 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 15:46:11.832606 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 15:46:11.835229 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 15:46:11.836473 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Sep 4 15:46:11.837407 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 15:46:11.841090 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 15:46:11.842215 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 15:46:11.843717 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 15:46:11.844609 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 15:46:11.845325 systemd[1]: Reached target basic.target - Basic System. Sep 4 15:46:11.846041 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 15:46:11.846071 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 15:46:11.846960 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 15:46:11.848690 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 15:46:11.850281 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 15:46:11.851959 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 15:46:11.855568 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 15:46:11.856520 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 15:46:11.857603 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 15:46:11.860340 jq[1483]: false Sep 4 15:46:11.861462 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 15:46:11.863598 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 15:46:11.866607 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 4 15:46:11.867354 extend-filesystems[1484]: Found /dev/vda6
Sep 4 15:46:11.867573 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 15:46:11.867961 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 15:46:11.868533 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 15:46:11.871455 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 15:46:11.872610 extend-filesystems[1484]: Found /dev/vda9
Sep 4 15:46:11.875066 extend-filesystems[1484]: Checking size of /dev/vda9
Sep 4 15:46:11.875060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 15:46:11.876536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 15:46:11.876707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 15:46:11.876932 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 15:46:11.877069 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 15:46:11.879785 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 15:46:11.879964 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 15:46:11.884343 jq[1499]: true
Sep 4 15:46:11.892312 update_engine[1495]: I20250904 15:46:11.892036 1495 main.cc:92] Flatcar Update Engine starting
Sep 4 15:46:11.893795 extend-filesystems[1484]: Resized partition /dev/vda9
Sep 4 15:46:11.895012 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 15:46:11.895914 extend-filesystems[1521]: resize2fs 1.47.2 (1-Jan-2025)
Sep 4 15:46:11.900962 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 15:46:11.902909 jq[1513]: true
Sep 4 15:46:11.931334 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 15:46:11.935658 dbus-daemon[1481]: [system] SELinux support is enabled
Sep 4 15:46:11.935856 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 15:46:11.943353 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 15:46:11.943353 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 15:46:11.943353 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 15:46:11.949884 extend-filesystems[1484]: Resized filesystem in /dev/vda9
Sep 4 15:46:11.945016 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 15:46:11.946369 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 15:46:11.951216 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 15:46:11.951239 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 15:46:11.953380 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 15:46:11.953419 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 15:46:11.955358 update_engine[1495]: I20250904 15:46:11.955251 1495 update_check_scheduler.cc:74] Next update check in 11m15s
Sep 4 15:46:11.955684 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 15:46:11.957355 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 4 15:46:11.958026 systemd-logind[1493]: New seat seat0.
Sep 4 15:46:11.959245 bash[1544]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 15:46:11.961287 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 15:46:11.963185 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 15:46:11.966479 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 15:46:11.968851 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 15:46:12.006078 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 15:46:12.075562 containerd[1507]: time="2025-09-04T15:46:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 15:46:12.076433 containerd[1507]: time="2025-09-04T15:46:12.076398343Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 4 15:46:12.084333 containerd[1507]: time="2025-09-04T15:46:12.084248979Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.133µs"
Sep 4 15:46:12.084333 containerd[1507]: time="2025-09-04T15:46:12.084293952Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 15:46:12.084333 containerd[1507]: time="2025-09-04T15:46:12.084331263Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 15:46:12.084530 containerd[1507]: time="2025-09-04T15:46:12.084500103Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 15:46:12.084530 containerd[1507]: time="2025-09-04T15:46:12.084523767Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 15:46:12.084585 containerd[1507]: time="2025-09-04T15:46:12.084551182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 15:46:12.084616 containerd[1507]: time="2025-09-04T15:46:12.084598670Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 15:46:12.084641 containerd[1507]: time="2025-09-04T15:46:12.084615151Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 15:46:12.084853 containerd[1507]: time="2025-09-04T15:46:12.084826051Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 15:46:12.084853 containerd[1507]: time="2025-09-04T15:46:12.084845485Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 15:46:12.084894 containerd[1507]: time="2025-09-04T15:46:12.084857337Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 15:46:12.084894 containerd[1507]: time="2025-09-04T15:46:12.084866355Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 15:46:12.084951 containerd[1507]: time="2025-09-04T15:46:12.084936389Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 15:46:12.085148 containerd[1507]: time="2025-09-04T15:46:12.085121950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 15:46:12.085170 containerd[1507]: time="2025-09-04T15:46:12.085157066Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 15:46:12.085170 containerd[1507]: time="2025-09-04T15:46:12.085167083Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 15:46:12.085241 containerd[1507]: time="2025-09-04T15:46:12.085217403Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 15:46:12.085531 containerd[1507]: time="2025-09-04T15:46:12.085501849Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 15:46:12.085620 containerd[1507]: time="2025-09-04T15:46:12.085602730Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 15:46:12.110039 containerd[1507]: time="2025-09-04T15:46:12.109986105Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 15:46:12.111097 containerd[1507]: time="2025-09-04T15:46:12.110287471Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 15:46:12.111097 containerd[1507]: time="2025-09-04T15:46:12.111098868Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 15:46:12.111176 containerd[1507]: time="2025-09-04T15:46:12.111118661Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 15:46:12.111176 containerd[1507]: time="2025-09-04T15:46:12.111143761Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 15:46:12.111176 containerd[1507]: time="2025-09-04T15:46:12.111157848Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 15:46:12.111176 containerd[1507]: time="2025-09-04T15:46:12.111175406Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 15:46:12.111262 containerd[1507]: time="2025-09-04T15:46:12.111191848Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 15:46:12.111262 containerd[1507]: time="2025-09-04T15:46:12.111208129Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 15:46:12.111262 containerd[1507]: time="2025-09-04T15:46:12.111220140Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 15:46:12.111262 containerd[1507]: time="2025-09-04T15:46:12.111232830Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 15:46:12.111346 containerd[1507]: time="2025-09-04T15:46:12.111264396Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111532001Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111577653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111594214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111604709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111615045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111625220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111637272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111647208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111657863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111669037Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 15:46:12.111674 containerd[1507]: time="2025-09-04T15:46:12.111679372Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 15:46:12.111871 containerd[1507]: time="2025-09-04T15:46:12.111864454Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 15:46:12.111895 containerd[1507]: time="2025-09-04T15:46:12.111879857Z" level=info msg="Start snapshots syncer"
Sep 4 15:46:12.111926 containerd[1507]: time="2025-09-04T15:46:12.111903880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 15:46:12.112153 containerd[1507]: time="2025-09-04T15:46:12.112099696Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 15:46:12.112255 containerd[1507]: time="2025-09-04T15:46:12.112166019Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 15:46:12.112255 containerd[1507]: time="2025-09-04T15:46:12.112238447Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 15:46:12.112427 containerd[1507]: time="2025-09-04T15:46:12.112361396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 15:46:12.112427 containerd[1507]: time="2025-09-04T15:46:12.112391006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 15:46:12.112427 containerd[1507]: time="2025-09-04T15:46:12.112403815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 15:46:12.112427 containerd[1507]: time="2025-09-04T15:46:12.112414949Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 15:46:12.112427 containerd[1507]: time="2025-09-04T15:46:12.112428437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 15:46:12.112514 containerd[1507]: time="2025-09-04T15:46:12.112438493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 15:46:12.112514 containerd[1507]: time="2025-09-04T15:46:12.112449707Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 15:46:12.112514 containerd[1507]: time="2025-09-04T15:46:12.112472932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 15:46:12.112514 containerd[1507]: time="2025-09-04T15:46:12.112484584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 15:46:12.112514 containerd[1507]: time="2025-09-04T15:46:12.112499349Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 15:46:12.112601 containerd[1507]: time="2025-09-04T15:46:12.112537778Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 15:46:12.112601 containerd[1507]: time="2025-09-04T15:46:12.112555376Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 15:46:12.112601 containerd[1507]: time="2025-09-04T15:46:12.112564076Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 15:46:12.112601 containerd[1507]: time="2025-09-04T15:46:12.112573533Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 15:46:12.112601 containerd[1507]: time="2025-09-04T15:46:12.112580995Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 15:46:12.112601 containerd[1507]: time="2025-09-04T15:46:12.112590293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 15:46:12.112601 containerd[1507]: time="2025-09-04T15:46:12.112600270Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 15:46:12.112711 containerd[1507]: time="2025-09-04T15:46:12.112676130Z" level=info msg="runtime interface created"
Sep 4 15:46:12.112711 containerd[1507]: time="2025-09-04T15:46:12.112681836Z" level=info msg="created NRI interface"
Sep 4 15:46:12.112711 containerd[1507]: time="2025-09-04T15:46:12.112689778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 15:46:12.112711 containerd[1507]: time="2025-09-04T15:46:12.112700153Z" level=info msg="Connect containerd service"
Sep 4 15:46:12.112774 containerd[1507]: time="2025-09-04T15:46:12.112723777Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 15:46:12.113417 containerd[1507]: time="2025-09-04T15:46:12.113385210Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 15:46:12.180735 containerd[1507]: time="2025-09-04T15:46:12.180659259Z" level=info msg="Start subscribing containerd event"
Sep 4 15:46:12.180817 containerd[1507]: time="2025-09-04T15:46:12.180743699Z" level=info msg="Start recovering state"
Sep 4 15:46:12.180836 containerd[1507]: time="2025-09-04T15:46:12.180828817Z" level=info msg="Start event monitor"
Sep 4 15:46:12.180903 containerd[1507]: time="2025-09-04T15:46:12.180842425Z" level=info msg="Start cni network conf syncer for default"
Sep 4 15:46:12.180903 containerd[1507]: time="2025-09-04T15:46:12.180851683Z" level=info msg="Start streaming server"
Sep 4 15:46:12.180903 containerd[1507]: time="2025-09-04T15:46:12.180860582Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 4 15:46:12.180903 containerd[1507]: time="2025-09-04T15:46:12.180867525Z" level=info msg="runtime interface starting up..."
Sep 4 15:46:12.180903 containerd[1507]: time="2025-09-04T15:46:12.180873112Z" level=info msg="starting plugins..."
Sep 4 15:46:12.180903 containerd[1507]: time="2025-09-04T15:46:12.180884445Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 15:46:12.181029 containerd[1507]: time="2025-09-04T15:46:12.180992789Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 15:46:12.181029 containerd[1507]: time="2025-09-04T15:46:12.181079782Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 15:46:12.181029 containerd[1507]: time="2025-09-04T15:46:12.181175156Z" level=info msg="containerd successfully booted in 0.105972s"
Sep 4 15:46:12.181281 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 15:46:12.490087 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 15:46:12.510437 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 15:46:12.513207 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 15:46:12.531857 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 15:46:12.532056 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 15:46:12.534610 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 15:46:12.558392 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 15:46:12.560896 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 15:46:12.564593 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 4 15:46:12.565744 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 15:46:13.060416 systemd-networkd[1427]: eth0: Gained IPv6LL
Sep 4 15:46:13.064411 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 15:46:13.065887 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 15:46:13.068098 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 15:46:13.070594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 15:46:13.094661 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 15:46:13.110666 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 15:46:13.110927 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 15:46:13.112454 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 15:46:13.113644 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 15:46:13.644611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 15:46:13.646058 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 15:46:13.647955 systemd[1]: Startup finished in 1.991s (kernel) + 4.601s (initrd) + 3.497s (userspace) = 10.090s.
Sep 4 15:46:13.666642 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 15:46:14.014665 kubelet[1612]: E0904 15:46:14.014586 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 15:46:14.016941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 15:46:14.017074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 15:46:14.017405 systemd[1]: kubelet.service: Consumed 741ms CPU time, 256.8M memory peak.
Sep 4 15:46:18.702470 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 15:46:18.703452 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:49232.service - OpenSSH per-connection server daemon (10.0.0.1:49232).
Sep 4 15:46:18.782561 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 49232 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:46:18.784257 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:46:18.790030 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 15:46:18.790936 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 15:46:18.795605 systemd-logind[1493]: New session 1 of user core.
Sep 4 15:46:18.809254 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 15:46:18.811569 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 15:46:18.828193 (systemd)[1631]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 15:46:18.830381 systemd-logind[1493]: New session c1 of user core.
Sep 4 15:46:18.924142 systemd[1631]: Queued start job for default target default.target.
Sep 4 15:46:18.943251 systemd[1631]: Created slice app.slice - User Application Slice.
Sep 4 15:46:18.943282 systemd[1631]: Reached target paths.target - Paths.
Sep 4 15:46:18.943340 systemd[1631]: Reached target timers.target - Timers.
Sep 4 15:46:18.944815 systemd[1631]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 15:46:18.954531 systemd[1631]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 15:46:18.954632 systemd[1631]: Reached target sockets.target - Sockets.
Sep 4 15:46:18.954682 systemd[1631]: Reached target basic.target - Basic System.
Sep 4 15:46:18.954712 systemd[1631]: Reached target default.target - Main User Target.
Sep 4 15:46:18.954738 systemd[1631]: Startup finished in 118ms.
Sep 4 15:46:18.954854 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 15:46:18.963464 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 15:46:19.032557 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:49236.service - OpenSSH per-connection server daemon (10.0.0.1:49236).
Sep 4 15:46:19.086710 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 49236 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:46:19.088055 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:46:19.092162 systemd-logind[1493]: New session 2 of user core.
Sep 4 15:46:19.100456 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 15:46:19.151096 sshd[1645]: Connection closed by 10.0.0.1 port 49236
Sep 4 15:46:19.151593 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Sep 4 15:46:19.162181 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:49236.service: Deactivated successfully.
Sep 4 15:46:19.165556 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 15:46:19.166573 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit.
Sep 4 15:46:19.168656 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:49240.service - OpenSSH per-connection server daemon (10.0.0.1:49240).
Sep 4 15:46:19.169673 systemd-logind[1493]: Removed session 2.
Sep 4 15:46:19.229242 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 49240 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:46:19.230625 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:46:19.235679 systemd-logind[1493]: New session 3 of user core.
Sep 4 15:46:19.255497 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 15:46:19.304484 sshd[1654]: Connection closed by 10.0.0.1 port 49240
Sep 4 15:46:19.304812 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
Sep 4 15:46:19.315128 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:49240.service: Deactivated successfully.
Sep 4 15:46:19.317510 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 15:46:19.318226 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit.
Sep 4 15:46:19.320332 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:49254.service - OpenSSH per-connection server daemon (10.0.0.1:49254).
Sep 4 15:46:19.320959 systemd-logind[1493]: Removed session 3.
Sep 4 15:46:19.362657 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 49254 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:46:19.363497 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:46:19.367507 systemd-logind[1493]: New session 4 of user core.
Sep 4 15:46:19.387475 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 15:46:19.440072 sshd[1663]: Connection closed by 10.0.0.1 port 49254
Sep 4 15:46:19.440437 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Sep 4 15:46:19.454258 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:49254.service: Deactivated successfully.
Sep 4 15:46:19.455620 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 15:46:19.456315 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit.
Sep 4 15:46:19.460502 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:49264.service - OpenSSH per-connection server daemon (10.0.0.1:49264).
Sep 4 15:46:19.461099 systemd-logind[1493]: Removed session 4.
Sep 4 15:46:19.514695 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 49264 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:46:19.515791 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:46:19.520142 systemd-logind[1493]: New session 5 of user core.
Sep 4 15:46:19.531462 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 15:46:19.587795 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 15:46:19.588376 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 15:46:19.602090 sudo[1673]: pam_unix(sudo:session): session closed for user root
Sep 4 15:46:19.603803 sshd[1672]: Connection closed by 10.0.0.1 port 49264
Sep 4 15:46:19.604178 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Sep 4 15:46:19.613231 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:49264.service: Deactivated successfully.
Sep 4 15:46:19.614811 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 15:46:19.616926 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit.
Sep 4 15:46:19.619532 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:49274.service - OpenSSH per-connection server daemon (10.0.0.1:49274).
Sep 4 15:46:19.620146 systemd-logind[1493]: Removed session 5.
Sep 4 15:46:19.672102 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 49274 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:46:19.673456 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:46:19.678146 systemd-logind[1493]: New session 6 of user core.
Sep 4 15:46:19.683471 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 15:46:19.735274 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 15:46:19.735571 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 15:46:19.813014 sudo[1684]: pam_unix(sudo:session): session closed for user root
Sep 4 15:46:19.819166 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 15:46:19.819761 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 15:46:19.829065 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 15:46:19.871280 augenrules[1706]: No rules
Sep 4 15:46:19.872383 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 15:46:19.872617 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 15:46:19.873863 sudo[1683]: pam_unix(sudo:session): session closed for user root
Sep 4 15:46:19.875397 sshd[1682]: Connection closed by 10.0.0.1 port 49274
Sep 4 15:46:19.875746 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Sep 4 15:46:19.889212 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:49274.service: Deactivated successfully.
Sep 4 15:46:19.890691 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 15:46:19.893378 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit.
Sep 4 15:46:19.894625 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:49288.service - OpenSSH per-connection server daemon (10.0.0.1:49288).
Sep 4 15:46:19.895516 systemd-logind[1493]: Removed session 6.
Sep 4 15:46:19.949769 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 49288 ssh2: RSA SHA256:E+dN53Zc2ac/SG1D7BMDq9afiZbfSZhP3o/CNSgjybU
Sep 4 15:46:19.950998 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 15:46:19.955359 systemd-logind[1493]: New session 7 of user core.
Sep 4 15:46:19.967468 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 15:46:20.019448 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 15:46:20.019996 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 15:46:20.031342 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 15:46:20.065512 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 15:46:20.066390 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 15:46:20.461325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 15:46:20.461487 systemd[1]: kubelet.service: Consumed 741ms CPU time, 256.8M memory peak.
Sep 4 15:46:20.463391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 15:46:20.483855 systemd[1]: Reload requested from client PID 1760 ('systemctl') (unit session-7.scope)...
Sep 4 15:46:20.483874 systemd[1]: Reloading...
Sep 4 15:46:20.554320 zram_generator::config[1803]: No configuration found.
Sep 4 15:46:20.766727 systemd[1]: Reloading finished in 282 ms.
Sep 4 15:46:20.817813 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 15:46:20.817892 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 15:46:20.818131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 15:46:20.818178 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.1M memory peak.
Sep 4 15:46:20.819795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 15:46:20.936467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 15:46:20.940733 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 15:46:20.976174 kubelet[1848]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 15:46:20.976174 kubelet[1848]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 15:46:20.976174 kubelet[1848]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 15:46:20.976511 kubelet[1848]: I0904 15:46:20.976233 1848 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 15:46:21.830465 kubelet[1848]: I0904 15:46:21.830415 1848 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 15:46:21.830465 kubelet[1848]: I0904 15:46:21.830449 1848 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 15:46:21.830786 kubelet[1848]: I0904 15:46:21.830768 1848 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 15:46:21.856178 kubelet[1848]: I0904 15:46:21.855545 1848 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 15:46:21.861123 kubelet[1848]: I0904 15:46:21.861099 1848 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 15:46:21.864133 kubelet[1848]: I0904 15:46:21.864097 1848 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 15:46:21.865327 kubelet[1848]: I0904 15:46:21.865265 1848 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 15:46:21.865534 kubelet[1848]: I0904 15:46:21.865329 1848 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 15:46:21.865629 kubelet[1848]: I0904 15:46:21.865594 1848 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 15:46:21.865629 kubelet[1848]: I0904 15:46:21.865605 1848 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 15:46:21.865943 kubelet[1848]: I0904 15:46:21.865876 1848 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 15:46:21.868395 kubelet[1848]: I0904 15:46:21.868365 1848 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 15:46:21.868395 kubelet[1848]: I0904 15:46:21.868394 1848 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 15:46:21.868483 kubelet[1848]: I0904 15:46:21.868421 1848 kubelet.go:352] "Adding apiserver pod source"
Sep 4 15:46:21.868483 kubelet[1848]: I0904 15:46:21.868438 1848 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 15:46:21.868545 kubelet[1848]: E0904 15:46:21.868477 1848 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:21.868627 kubelet[1848]: E0904 15:46:21.868607 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:21.871020 kubelet[1848]: I0904 15:46:21.871000 1848 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 4 15:46:21.871767 kubelet[1848]: I0904 15:46:21.871746 1848 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 15:46:21.871948 kubelet[1848]: W0904 15:46:21.871936 1848 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 15:46:21.872869 kubelet[1848]: I0904 15:46:21.872845 1848 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 15:46:21.873029 kubelet[1848]: I0904 15:46:21.873016 1848 server.go:1287] "Started kubelet"
Sep 4 15:46:21.876281 kubelet[1848]: I0904 15:46:21.875369 1848 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 15:46:21.876281 kubelet[1848]: I0904 15:46:21.876029 1848 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 15:46:21.876281 kubelet[1848]: I0904 15:46:21.876129 1848 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 15:46:21.876281 kubelet[1848]: I0904 15:46:21.876214 1848 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 15:46:21.876281 kubelet[1848]: I0904 15:46:21.876256 1848 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 15:46:21.878569 kubelet[1848]: E0904 15:46:21.878516 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:21.878569 kubelet[1848]: I0904 15:46:21.878553 1848 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 15:46:21.878656 kubelet[1848]: I0904 15:46:21.878607 1848 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 15:46:21.878747 kubelet[1848]: I0904 15:46:21.878724 1848 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 15:46:21.878790 kubelet[1848]: I0904 15:46:21.878777 1848 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 15:46:21.879093 kubelet[1848]: E0904 15:46:21.879056 1848 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 15:46:21.879386 kubelet[1848]: I0904 15:46:21.879366 1848 factory.go:221] Registration of the systemd container factory successfully
Sep 4 15:46:21.879489 kubelet[1848]: I0904 15:46:21.879469 1848 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 15:46:21.881018 kubelet[1848]: I0904 15:46:21.880980 1848 factory.go:221] Registration of the containerd container factory successfully
Sep 4 15:46:21.887355 kubelet[1848]: W0904 15:46:21.887227 1848 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.45" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Sep 4 15:46:21.888368 kubelet[1848]: E0904 15:46:21.887074 1848 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.45.18621ee6f19f672c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.45,UID:10.0.0.45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.45,},FirstTimestamp:2025-09-04 15:46:21.872981804 +0000 UTC m=+0.929272129,LastTimestamp:2025-09-04 15:46:21.872981804 +0000 UTC m=+0.929272129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.45,}"
Sep 4 15:46:21.888368 kubelet[1848]: W0904 15:46:21.887903 1848 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Sep 4 15:46:21.888368 kubelet[1848]: E0904 15:46:21.887897 1848 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.45\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Sep 4 15:46:21.888368 kubelet[1848]: E0904 15:46:21.887921 1848 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Sep 4 15:46:21.888368 kubelet[1848]: E0904 15:46:21.888109 1848 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.45\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Sep 4 15:46:21.888563 kubelet[1848]: W0904 15:46:21.888029 1848 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Sep 4 15:46:21.888563 kubelet[1848]: E0904 15:46:21.888502 1848 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Sep 4 15:46:21.897376 kubelet[1848]: I0904 15:46:21.897353 1848 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 15:46:21.897376 kubelet[1848]: I0904 15:46:21.897373 1848 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 15:46:21.897480 kubelet[1848]: I0904 15:46:21.897391 1848 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 15:46:21.979651 kubelet[1848]: E0904 15:46:21.979597 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:21.992034 kubelet[1848]: I0904 15:46:21.991713 1848 policy_none.go:49] "None policy: Start"
Sep 4 15:46:21.992034 kubelet[1848]: I0904 15:46:21.991747 1848 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 15:46:21.992034 kubelet[1848]: I0904 15:46:21.991760 1848 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 15:46:21.997875 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 15:46:22.008986 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 15:46:22.012524 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 15:46:22.013500 kubelet[1848]: I0904 15:46:22.013466 1848 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 15:46:22.014430 kubelet[1848]: I0904 15:46:22.014407 1848 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 15:46:22.014430 kubelet[1848]: I0904 15:46:22.014431 1848 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 15:46:22.014602 kubelet[1848]: I0904 15:46:22.014457 1848 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 15:46:22.014602 kubelet[1848]: I0904 15:46:22.014465 1848 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 15:46:22.014602 kubelet[1848]: E0904 15:46:22.014565 1848 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 15:46:22.019234 kubelet[1848]: I0904 15:46:22.019142 1848 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 15:46:22.019740 kubelet[1848]: I0904 15:46:22.019360 1848 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 15:46:22.019740 kubelet[1848]: I0904 15:46:22.019374 1848 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 15:46:22.019740 kubelet[1848]: I0904 15:46:22.019625 1848 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 15:46:22.020618 kubelet[1848]: E0904 15:46:22.020594 1848 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 15:46:22.020699 kubelet[1848]: E0904 15:46:22.020643 1848 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.45\" not found"
Sep 4 15:46:22.093364 kubelet[1848]: E0904 15:46:22.092687 1848 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.45\" not found" node="10.0.0.45"
Sep 4 15:46:22.120857 kubelet[1848]: I0904 15:46:22.120541 1848 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.45"
Sep 4 15:46:22.128326 kubelet[1848]: I0904 15:46:22.128285 1848 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.45"
Sep 4 15:46:22.128388 kubelet[1848]: E0904 15:46:22.128335 1848 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.45\": node \"10.0.0.45\" not found"
Sep 4 15:46:22.136831 kubelet[1848]: E0904 15:46:22.136800 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:22.237425 kubelet[1848]: E0904 15:46:22.237390 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:22.337809 kubelet[1848]: E0904 15:46:22.337770 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:22.438612 kubelet[1848]: E0904 15:46:22.438567 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:22.471436 sudo[1719]: pam_unix(sudo:session): session closed for user root
Sep 4 15:46:22.472859 sshd[1718]: Connection closed by 10.0.0.1 port 49288
Sep 4 15:46:22.473163 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Sep 4 15:46:22.477102 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:49288.service: Deactivated successfully.
Sep 4 15:46:22.478851 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 15:46:22.479030 systemd[1]: session-7.scope: Consumed 407ms CPU time, 73M memory peak.
Sep 4 15:46:22.480046 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit.
Sep 4 15:46:22.481155 systemd-logind[1493]: Removed session 7.
Sep 4 15:46:22.538998 kubelet[1848]: E0904 15:46:22.538964 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:22.639481 kubelet[1848]: E0904 15:46:22.639455 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:22.739995 kubelet[1848]: E0904 15:46:22.739903 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:22.832481 kubelet[1848]: I0904 15:46:22.832445 1848 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Sep 4 15:46:22.832632 kubelet[1848]: W0904 15:46:22.832595 1848 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Sep 4 15:46:22.840594 kubelet[1848]: E0904 15:46:22.840572 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:22.868955 kubelet[1848]: E0904 15:46:22.868918 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:22.941588 kubelet[1848]: E0904 15:46:22.941546 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:23.042180 kubelet[1848]: E0904 15:46:23.042076 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:23.142613 kubelet[1848]: E0904 15:46:23.142568 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:23.243053 kubelet[1848]: E0904 15:46:23.243020 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:23.343637 kubelet[1848]: E0904 15:46:23.343547 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:23.444050 kubelet[1848]: E0904 15:46:23.444015 1848 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.45\" not found"
Sep 4 15:46:23.545716 kubelet[1848]: I0904 15:46:23.545625 1848 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Sep 4 15:46:23.545929 containerd[1507]: time="2025-09-04T15:46:23.545893560Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 15:46:23.546335 kubelet[1848]: I0904 15:46:23.546298 1848 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Sep 4 15:46:23.869830 kubelet[1848]: E0904 15:46:23.869798 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:23.869830 kubelet[1848]: I0904 15:46:23.869818 1848 apiserver.go:52] "Watching apiserver"
Sep 4 15:46:23.874219 kubelet[1848]: E0904 15:46:23.873882 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f"
Sep 4 15:46:23.879838 kubelet[1848]: I0904 15:46:23.879802 1848 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 15:46:23.883669 systemd[1]: Created slice kubepods-besteffort-pod3ca463c0_b05e_4688_bfef_09076445416c.slice - libcontainer container kubepods-besteffort-pod3ca463c0_b05e_4688_bfef_09076445416c.slice.
Sep 4 15:46:23.891077 kubelet[1848]: I0904 15:46:23.891024 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/883dffe8-45c4-41b0-a3f0-0c53b89dd372-lib-modules\") pod \"kube-proxy-l7gnx\" (UID: \"883dffe8-45c4-41b0-a3f0-0c53b89dd372\") " pod="kube-system/kube-proxy-l7gnx"
Sep 4 15:46:23.891077 kubelet[1848]: I0904 15:46:23.891078 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vhbx\" (UniqueName: \"kubernetes.io/projected/883dffe8-45c4-41b0-a3f0-0c53b89dd372-kube-api-access-9vhbx\") pod \"kube-proxy-l7gnx\" (UID: \"883dffe8-45c4-41b0-a3f0-0c53b89dd372\") " pod="kube-system/kube-proxy-l7gnx"
Sep 4 15:46:23.891211 kubelet[1848]: I0904 15:46:23.891099 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-cni-bin-dir\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891211 kubelet[1848]: I0904 15:46:23.891113 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98a9b3c2-bd87-49de-849d-b1a3195b1b9f-registration-dir\") pod \"csi-node-driver-4tbln\" (UID: \"98a9b3c2-bd87-49de-849d-b1a3195b1b9f\") " pod="calico-system/csi-node-driver-4tbln"
Sep 4 15:46:23.891211 kubelet[1848]: I0904 15:46:23.891127 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98a9b3c2-bd87-49de-849d-b1a3195b1b9f-socket-dir\") pod \"csi-node-driver-4tbln\" (UID: \"98a9b3c2-bd87-49de-849d-b1a3195b1b9f\") " pod="calico-system/csi-node-driver-4tbln"
Sep 4 15:46:23.891211 kubelet[1848]: I0904 15:46:23.891142 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/98a9b3c2-bd87-49de-849d-b1a3195b1b9f-varrun\") pod \"csi-node-driver-4tbln\" (UID: \"98a9b3c2-bd87-49de-849d-b1a3195b1b9f\") " pod="calico-system/csi-node-driver-4tbln"
Sep 4 15:46:23.891211 kubelet[1848]: I0904 15:46:23.891157 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkg6p\" (UniqueName: \"kubernetes.io/projected/98a9b3c2-bd87-49de-849d-b1a3195b1b9f-kube-api-access-pkg6p\") pod \"csi-node-driver-4tbln\" (UID: \"98a9b3c2-bd87-49de-849d-b1a3195b1b9f\") " pod="calico-system/csi-node-driver-4tbln"
Sep 4 15:46:23.891323 kubelet[1848]: I0904 15:46:23.891173 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/883dffe8-45c4-41b0-a3f0-0c53b89dd372-kube-proxy\") pod \"kube-proxy-l7gnx\" (UID: \"883dffe8-45c4-41b0-a3f0-0c53b89dd372\") " pod="kube-system/kube-proxy-l7gnx"
Sep 4 15:46:23.891323 kubelet[1848]: I0904 15:46:23.891189 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-policysync\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891323 kubelet[1848]: I0904 15:46:23.891203 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ca463c0-b05e-4688-bfef-09076445416c-tigera-ca-bundle\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891323 kubelet[1848]: I0904 15:46:23.891216 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm8zr\" (UniqueName: \"kubernetes.io/projected/3ca463c0-b05e-4688-bfef-09076445416c-kube-api-access-fm8zr\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891323 kubelet[1848]: I0904 15:46:23.891230 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/883dffe8-45c4-41b0-a3f0-0c53b89dd372-xtables-lock\") pod \"kube-proxy-l7gnx\" (UID: \"883dffe8-45c4-41b0-a3f0-0c53b89dd372\") " pod="kube-system/kube-proxy-l7gnx"
Sep 4 15:46:23.891428 kubelet[1848]: I0904 15:46:23.891245 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3ca463c0-b05e-4688-bfef-09076445416c-node-certs\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891428 kubelet[1848]: I0904 15:46:23.891260 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-var-lib-calico\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891428 kubelet[1848]: I0904 15:46:23.891273 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98a9b3c2-bd87-49de-849d-b1a3195b1b9f-kubelet-dir\") pod \"csi-node-driver-4tbln\" (UID: \"98a9b3c2-bd87-49de-849d-b1a3195b1b9f\") " pod="calico-system/csi-node-driver-4tbln"
Sep 4 15:46:23.891428 kubelet[1848]: I0904 15:46:23.891290 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-cni-net-dir\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891428 kubelet[1848]: I0904 15:46:23.891334 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-flexvol-driver-host\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891519 kubelet[1848]: I0904 15:46:23.891352 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-var-run-calico\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891519 kubelet[1848]: I0904 15:46:23.891367 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-cni-log-dir\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891519 kubelet[1848]: I0904 15:46:23.891381 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-lib-modules\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.891519 kubelet[1848]: I0904 15:46:23.891396 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ca463c0-b05e-4688-bfef-09076445416c-xtables-lock\") pod \"calico-node-bz5sb\" (UID: \"3ca463c0-b05e-4688-bfef-09076445416c\") " pod="calico-system/calico-node-bz5sb"
Sep 4 15:46:23.902137 systemd[1]: Created slice kubepods-besteffort-pod883dffe8_45c4_41b0_a3f0_0c53b89dd372.slice - libcontainer container kubepods-besteffort-pod883dffe8_45c4_41b0_a3f0_0c53b89dd372.slice.
Sep 4 15:46:23.994065 kubelet[1848]: E0904 15:46:23.994030 1848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 15:46:23.994065 kubelet[1848]: W0904 15:46:23.994052 1848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 15:46:23.994200 kubelet[1848]: E0904 15:46:23.994078 1848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 15:46:23.996931 kubelet[1848]: E0904 15:46:23.996904 1848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 15:46:23.996931 kubelet[1848]: W0904 15:46:23.996923 1848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 15:46:23.997050 kubelet[1848]: E0904 15:46:23.996948 1848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 15:46:24.004692 kubelet[1848]: E0904 15:46:24.004654 1848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 15:46:24.004858 kubelet[1848]: W0904 15:46:24.004786 1848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 15:46:24.004858 kubelet[1848]: E0904 15:46:24.004810 1848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 15:46:24.005317 kubelet[1848]: E0904 15:46:24.005288 1848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 15:46:24.005461 kubelet[1848]: W0904 15:46:24.005437 1848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 15:46:24.005579 kubelet[1848]: E0904 15:46:24.005521 1848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 15:46:24.005773 kubelet[1848]: E0904 15:46:24.005760 1848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 15:46:24.005895 kubelet[1848]: W0904 15:46:24.005840 1848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 15:46:24.005895 kubelet[1848]: E0904 15:46:24.005874 1848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 15:46:24.202435 containerd[1507]: time="2025-09-04T15:46:24.202386404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bz5sb,Uid:3ca463c0-b05e-4688-bfef-09076445416c,Namespace:calico-system,Attempt:0,}"
Sep 4 15:46:24.204495 kubelet[1848]: E0904 15:46:24.204445 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:46:24.205395 containerd[1507]: time="2025-09-04T15:46:24.205368351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7gnx,Uid:883dffe8-45c4-41b0-a3f0-0c53b89dd372,Namespace:kube-system,Attempt:0,}"
Sep 4 15:46:24.765024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358700058.mount: Deactivated successfully.
Sep 4 15:46:24.771355 containerd[1507]: time="2025-09-04T15:46:24.771293047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 15:46:24.772342 containerd[1507]: time="2025-09-04T15:46:24.772294275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 4 15:46:24.773013 containerd[1507]: time="2025-09-04T15:46:24.772981965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 15:46:24.776429 containerd[1507]: time="2025-09-04T15:46:24.773660704Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 15:46:24.776603 containerd[1507]: time="2025-09-04T15:46:24.774562161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Sep 4 15:46:24.778697 containerd[1507]: time="2025-09-04T15:46:24.778663566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 15:46:24.779460 containerd[1507]: time="2025-09-04T15:46:24.779428851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 569.577911ms"
Sep 4 15:46:24.780716 containerd[1507]: time="2025-09-04T15:46:24.780679007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 569.703654ms"
Sep 4 15:46:24.795457 containerd[1507]: time="2025-09-04T15:46:24.795411494Z" level=info msg="connecting to shim f2e606e99619c5ac098828dd2caad6dd866d953c7d97d072708fc8b11b492bcf" address="unix:///run/containerd/s/724219dd4adad6d744e4e96cac4b3cc685aa2a49f7a2bd3b53d3ee8e72a5f933" namespace=k8s.io protocol=ttrpc version=3
Sep 4 15:46:24.796939 containerd[1507]: time="2025-09-04T15:46:24.796894277Z" level=info msg="connecting to shim c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981" address="unix:///run/containerd/s/d8ff4f5ef3e1b084d499f98a542fdd9486389531733ac3097ad396a381f11de2" namespace=k8s.io protocol=ttrpc version=3
Sep 4 15:46:24.819464 systemd[1]: Started cri-containerd-c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981.scope - libcontainer container c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981.
Sep 4 15:46:24.820620 systemd[1]: Started cri-containerd-f2e606e99619c5ac098828dd2caad6dd866d953c7d97d072708fc8b11b492bcf.scope - libcontainer container f2e606e99619c5ac098828dd2caad6dd866d953c7d97d072708fc8b11b492bcf.
Sep 4 15:46:24.846814 containerd[1507]: time="2025-09-04T15:46:24.846774058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bz5sb,Uid:3ca463c0-b05e-4688-bfef-09076445416c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981\""
Sep 4 15:46:24.848469 containerd[1507]: time="2025-09-04T15:46:24.848440679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7gnx,Uid:883dffe8-45c4-41b0-a3f0-0c53b89dd372,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e606e99619c5ac098828dd2caad6dd866d953c7d97d072708fc8b11b492bcf\""
Sep 4 15:46:24.849253 kubelet[1848]: E0904 15:46:24.849211 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:46:24.849463 containerd[1507]: time="2025-09-04T15:46:24.849374780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 4 15:46:24.870753 kubelet[1848]: E0904 15:46:24.870715 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:25.871154 kubelet[1848]: E0904 15:46:25.871103 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:26.016629 kubelet[1848]: E0904 15:46:26.016570 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f"
Sep 4 15:46:26.871337 kubelet[1848]: E0904 15:46:26.871232 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:27.871739 kubelet[1848]: E0904 15:46:27.871669 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:28.015767 kubelet[1848]: E0904 15:46:28.015700 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f"
Sep 4 15:46:28.871916 kubelet[1848]: E0904 15:46:28.871871 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:29.872447 kubelet[1848]: E0904 15:46:29.872399 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:29.907271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355517179.mount: Deactivated successfully.
Sep 4 15:46:29.987114 containerd[1507]: time="2025-09-04T15:46:29.987060617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:29.988277 containerd[1507]: time="2025-09-04T15:46:29.988234689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193"
Sep 4 15:46:29.988970 containerd[1507]: time="2025-09-04T15:46:29.988934777Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:29.991389 containerd[1507]: time="2025-09-04T15:46:29.991355624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:29.992754 containerd[1507]: time="2025-09-04T15:46:29.992719827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 5.143294699s"
Sep 4 15:46:29.992754 containerd[1507]: time="2025-09-04T15:46:29.992752441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 4 15:46:29.993703 containerd[1507]: time="2025-09-04T15:46:29.993672994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 4 15:46:29.995406 containerd[1507]: time="2025-09-04T15:46:29.995282563Z" level=info msg="CreateContainer within sandbox \"c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 4 15:46:30.010413 containerd[1507]: time="2025-09-04T15:46:30.009925862Z" level=info msg="Container 05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:46:30.011950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934839192.mount: Deactivated successfully.
Sep 4 15:46:30.015783 kubelet[1848]: E0904 15:46:30.015612 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f"
Sep 4 15:46:30.022362 containerd[1507]: time="2025-09-04T15:46:30.022176431Z" level=info msg="CreateContainer within sandbox \"c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f\""
Sep 4 15:46:30.022913 containerd[1507]: time="2025-09-04T15:46:30.022868918Z" level=info msg="StartContainer for \"05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f\""
Sep 4 15:46:30.024386 containerd[1507]: time="2025-09-04T15:46:30.024353379Z" level=info msg="connecting to shim 05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f" address="unix:///run/containerd/s/d8ff4f5ef3e1b084d499f98a542fdd9486389531733ac3097ad396a381f11de2" protocol=ttrpc version=3
Sep 4 15:46:30.050504 systemd[1]: Started cri-containerd-05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f.scope - libcontainer container 05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f.
Sep 4 15:46:30.084066 containerd[1507]: time="2025-09-04T15:46:30.084033628Z" level=info msg="StartContainer for \"05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f\" returns successfully"
Sep 4 15:46:30.093169 systemd[1]: cri-containerd-05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f.scope: Deactivated successfully.
Sep 4 15:46:30.097807 containerd[1507]: time="2025-09-04T15:46:30.097617570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f\" id:\"05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f\" pid:2018 exited_at:{seconds:1757000790 nanos:95524480}"
Sep 4 15:46:30.097807 containerd[1507]: time="2025-09-04T15:46:30.097664695Z" level=info msg="received exit event container_id:\"05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f\" id:\"05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f\" pid:2018 exited_at:{seconds:1757000790 nanos:95524480}"
Sep 4 15:46:30.872954 kubelet[1848]: E0904 15:46:30.872792 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:30.876757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05e3225fe42aa8116a5e36b5ebfddafa7d83560e2684ccd663e5f85294f4f01f-rootfs.mount: Deactivated successfully.
Sep 4 15:46:31.050788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121891948.mount: Deactivated successfully.
Sep 4 15:46:31.293022 containerd[1507]: time="2025-09-04T15:46:31.292917252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:31.293847 containerd[1507]: time="2025-09-04T15:46:31.293731127Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 4 15:46:31.294509 containerd[1507]: time="2025-09-04T15:46:31.294476969Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:31.296249 containerd[1507]: time="2025-09-04T15:46:31.296224356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:31.296775 containerd[1507]: time="2025-09-04T15:46:31.296714256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.302941341s"
Sep 4 15:46:31.296775 containerd[1507]: time="2025-09-04T15:46:31.296743436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 4 15:46:31.297707 containerd[1507]: time="2025-09-04T15:46:31.297686301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 4 15:46:31.298588 containerd[1507]: time="2025-09-04T15:46:31.298566291Z" level=info msg="CreateContainer within sandbox \"f2e606e99619c5ac098828dd2caad6dd866d953c7d97d072708fc8b11b492bcf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 15:46:31.306397 containerd[1507]: time="2025-09-04T15:46:31.306362119Z" level=info msg="Container ef5990e2f68d8e31f0fb5ed078312f58db8bc2499fcd574e7d77987a516a0e78: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:46:31.312482 containerd[1507]: time="2025-09-04T15:46:31.312445976Z" level=info msg="CreateContainer within sandbox \"f2e606e99619c5ac098828dd2caad6dd866d953c7d97d072708fc8b11b492bcf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef5990e2f68d8e31f0fb5ed078312f58db8bc2499fcd574e7d77987a516a0e78\""
Sep 4 15:46:31.312861 containerd[1507]: time="2025-09-04T15:46:31.312836665Z" level=info msg="StartContainer for \"ef5990e2f68d8e31f0fb5ed078312f58db8bc2499fcd574e7d77987a516a0e78\""
Sep 4 15:46:31.314133 containerd[1507]: time="2025-09-04T15:46:31.314109501Z" level=info msg="connecting to shim ef5990e2f68d8e31f0fb5ed078312f58db8bc2499fcd574e7d77987a516a0e78" address="unix:///run/containerd/s/724219dd4adad6d744e4e96cac4b3cc685aa2a49f7a2bd3b53d3ee8e72a5f933" protocol=ttrpc version=3
Sep 4 15:46:31.335471 systemd[1]: Started cri-containerd-ef5990e2f68d8e31f0fb5ed078312f58db8bc2499fcd574e7d77987a516a0e78.scope - libcontainer container ef5990e2f68d8e31f0fb5ed078312f58db8bc2499fcd574e7d77987a516a0e78.
Sep 4 15:46:31.366097 containerd[1507]: time="2025-09-04T15:46:31.366061679Z" level=info msg="StartContainer for \"ef5990e2f68d8e31f0fb5ed078312f58db8bc2499fcd574e7d77987a516a0e78\" returns successfully"
Sep 4 15:46:31.873499 kubelet[1848]: E0904 15:46:31.873455 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:32.014864 kubelet[1848]: E0904 15:46:32.014771 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f"
Sep 4 15:46:32.040359 kubelet[1848]: E0904 15:46:32.040069 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:46:32.049141 kubelet[1848]: I0904 15:46:32.049075 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l7gnx" podStartSLOduration=3.6010933080000003 podStartE2EDuration="10.049062249s" podCreationTimestamp="2025-09-04 15:46:22 +0000 UTC" firstStartedPulling="2025-09-04 15:46:24.849632859 +0000 UTC m=+3.905923184" lastFinishedPulling="2025-09-04 15:46:31.29760184 +0000 UTC m=+10.353892125" observedRunningTime="2025-09-04 15:46:32.049056973 +0000 UTC m=+11.105347338" watchObservedRunningTime="2025-09-04 15:46:32.049062249 +0000 UTC m=+11.105352574"
Sep 4 15:46:32.873978 kubelet[1848]: E0904 15:46:32.873944 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:33.041246 kubelet[1848]: E0904 15:46:33.041151 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:46:33.875080 kubelet[1848]: E0904 15:46:33.874994 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:34.015834 kubelet[1848]: E0904 15:46:34.015521 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f"
Sep 4 15:46:34.876019 kubelet[1848]: E0904 15:46:34.875979 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:35.876696 kubelet[1848]: E0904 15:46:35.876653 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:36.015150 kubelet[1848]: E0904 15:46:36.015061 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f"
Sep 4 15:46:36.877029 kubelet[1848]: E0904 15:46:36.876980 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:37.878151 kubelet[1848]: E0904 15:46:37.878114 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:38.015328 kubelet[1848]: E0904 15:46:38.015247 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f"
Sep 4 15:46:38.120276 containerd[1507]: time="2025-09-04T15:46:38.119662225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:38.120276 containerd[1507]: time="2025-09-04T15:46:38.120244928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 4 15:46:38.120976 containerd[1507]: time="2025-09-04T15:46:38.120949736Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:38.123521 containerd[1507]: time="2025-09-04T15:46:38.122878165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:46:38.123521 containerd[1507]: time="2025-09-04T15:46:38.123403293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 6.825691247s"
Sep 4 15:46:38.123521 containerd[1507]: time="2025-09-04T15:46:38.123429521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 4 15:46:38.125197 containerd[1507]: time="2025-09-04T15:46:38.125173471Z" level=info msg="CreateContainer within sandbox \"c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 15:46:38.131375 containerd[1507]: time="2025-09-04T15:46:38.131108849Z" level=info msg="Container d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:46:38.140462 containerd[1507]: time="2025-09-04T15:46:38.140403024Z" level=info msg="CreateContainer within sandbox \"c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d\""
Sep 4 15:46:38.140956 containerd[1507]: time="2025-09-04T15:46:38.140929871Z" level=info msg="StartContainer for \"d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d\""
Sep 4 15:46:38.142918 containerd[1507]: time="2025-09-04T15:46:38.142287191Z" level=info msg="connecting to shim d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d" address="unix:///run/containerd/s/d8ff4f5ef3e1b084d499f98a542fdd9486389531733ac3097ad396a381f11de2" protocol=ttrpc version=3
Sep 4 15:46:38.160554 systemd[1]: Started cri-containerd-d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d.scope - libcontainer container d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d.
Sep 4 15:46:38.190720 containerd[1507]: time="2025-09-04T15:46:38.190677777Z" level=info msg="StartContainer for \"d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d\" returns successfully"
Sep 4 15:46:38.717469 containerd[1507]: time="2025-09-04T15:46:38.717378966Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 15:46:38.719103 systemd[1]: cri-containerd-d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d.scope: Deactivated successfully.
Sep 4 15:46:38.719443 systemd[1]: cri-containerd-d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d.scope: Consumed 425ms CPU time, 207.4M memory peak, 165.8M written to disk.
Sep 4 15:46:38.721134 containerd[1507]: time="2025-09-04T15:46:38.721056621Z" level=info msg="received exit event container_id:\"d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d\" id:\"d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d\" pid:2244 exited_at:{seconds:1757000798 nanos:720869544}"
Sep 4 15:46:38.721199 containerd[1507]: time="2025-09-04T15:46:38.721136586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d\" id:\"d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d\" pid:2244 exited_at:{seconds:1757000798 nanos:720869544}"
Sep 4 15:46:38.737559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d73c4d24212a88b59d16558358ad28cc379bea564a89326fcfb4a091ad70ae0d-rootfs.mount: Deactivated successfully.
Sep 4 15:46:38.811510 kubelet[1848]: I0904 15:46:38.810954 1848 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 15:46:38.858543 systemd[1]: Created slice kubepods-burstable-pod38c417fb_b280_4f0a_ba6b_ae890b1ccfc3.slice - libcontainer container kubepods-burstable-pod38c417fb_b280_4f0a_ba6b_ae890b1ccfc3.slice.
Sep 4 15:46:38.878370 kubelet[1848]: E0904 15:46:38.878331 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:38.881883 systemd[1]: Created slice kubepods-burstable-pod8278b82f_9e5a_4962_981e_5deac349f2fe.slice - libcontainer container kubepods-burstable-pod8278b82f_9e5a_4962_981e_5deac349f2fe.slice.
Sep 4 15:46:38.894740 systemd[1]: Created slice kubepods-besteffort-pod39da1048_d7f6_4184_8f7e_1f20fd878471.slice - libcontainer container kubepods-besteffort-pod39da1048_d7f6_4184_8f7e_1f20fd878471.slice.
Sep 4 15:46:38.894850 kubelet[1848]: I0904 15:46:38.894772 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38c417fb-b280-4f0a-ba6b-ae890b1ccfc3-config-volume\") pod \"coredns-668d6bf9bc-jwndq\" (UID: \"38c417fb-b280-4f0a-ba6b-ae890b1ccfc3\") " pod="kube-system/coredns-668d6bf9bc-jwndq"
Sep 4 15:46:38.894850 kubelet[1848]: I0904 15:46:38.894804 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee38ed2-b930-4b4f-a878-aa34375eb46d-config\") pod \"goldmane-54d579b49d-92gzs\" (UID: \"eee38ed2-b930-4b4f-a878-aa34375eb46d\") " pod="calico-system/goldmane-54d579b49d-92gzs"
Sep 4 15:46:38.894850 kubelet[1848]: I0904 15:46:38.894821 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbtsc\" (UniqueName: \"kubernetes.io/projected/8278b82f-9e5a-4962-981e-5deac349f2fe-kube-api-access-lbtsc\") pod \"coredns-668d6bf9bc-zf8sn\" (UID: \"8278b82f-9e5a-4962-981e-5deac349f2fe\") " pod="kube-system/coredns-668d6bf9bc-zf8sn"
Sep 4 15:46:38.894850 kubelet[1848]: I0904 15:46:38.894842 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/39da1048-d7f6-4184-8f7e-1f20fd878471-calico-apiserver-certs\") pod \"calico-apiserver-857d4f49b9-497bn\" (UID: \"39da1048-d7f6-4184-8f7e-1f20fd878471\") " pod="calico-apiserver/calico-apiserver-857d4f49b9-497bn"
Sep 4 15:46:38.894964 kubelet[1848]: I0904 15:46:38.894859 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjkw7\" (UniqueName: \"kubernetes.io/projected/39da1048-d7f6-4184-8f7e-1f20fd878471-kube-api-access-hjkw7\") pod \"calico-apiserver-857d4f49b9-497bn\" (UID: \"39da1048-d7f6-4184-8f7e-1f20fd878471\") " pod="calico-apiserver/calico-apiserver-857d4f49b9-497bn"
Sep 4 15:46:38.894964 kubelet[1848]: I0904 15:46:38.894874 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8278b82f-9e5a-4962-981e-5deac349f2fe-config-volume\") pod \"coredns-668d6bf9bc-zf8sn\" (UID: \"8278b82f-9e5a-4962-981e-5deac349f2fe\") " pod="kube-system/coredns-668d6bf9bc-zf8sn"
Sep 4 15:46:38.894964 kubelet[1848]: I0904 15:46:38.894891 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqczj\" (UniqueName: \"kubernetes.io/projected/178e1fbe-3648-4cc5-ab1e-eff6183d2fa0-kube-api-access-zqczj\") pod \"calico-apiserver-857d4f49b9-p879q\" (UID: \"178e1fbe-3648-4cc5-ab1e-eff6183d2fa0\") " pod="calico-apiserver/calico-apiserver-857d4f49b9-p879q"
Sep 4 15:46:38.894964 kubelet[1848]: I0904 15:46:38.894908 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5580c510-f994-4e0d-b8b6-f1d1fdc65ef4-tigera-ca-bundle\") pod \"calico-kube-controllers-68dbb789b-bbg75\" (UID: \"5580c510-f994-4e0d-b8b6-f1d1fdc65ef4\") " pod="calico-system/calico-kube-controllers-68dbb789b-bbg75"
Sep 4 15:46:38.894964 kubelet[1848]: I0904 15:46:38.894948 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eee38ed2-b930-4b4f-a878-aa34375eb46d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-92gzs\" (UID: \"eee38ed2-b930-4b4f-a878-aa34375eb46d\") " pod="calico-system/goldmane-54d579b49d-92gzs"
Sep 4 15:46:38.895063 kubelet[1848]: I0904 15:46:38.894966 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4241afa6-f0f9-4549-8438-e207627c5501-whisker-backend-key-pair\") pod \"whisker-b946ffd57-vlmjj\" (UID: \"4241afa6-f0f9-4549-8438-e207627c5501\") " pod="calico-system/whisker-b946ffd57-vlmjj"
Sep 4 15:46:38.895063 kubelet[1848]: I0904 15:46:38.894985 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz4j7\" (UniqueName: \"kubernetes.io/projected/eee38ed2-b930-4b4f-a878-aa34375eb46d-kube-api-access-qz4j7\") pod \"goldmane-54d579b49d-92gzs\" (UID: \"eee38ed2-b930-4b4f-a878-aa34375eb46d\") " pod="calico-system/goldmane-54d579b49d-92gzs"
Sep 4 15:46:38.895154 kubelet[1848]: I0904 15:46:38.895123 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/178e1fbe-3648-4cc5-ab1e-eff6183d2fa0-calico-apiserver-certs\") pod \"calico-apiserver-857d4f49b9-p879q\" (UID: \"178e1fbe-3648-4cc5-ab1e-eff6183d2fa0\") " pod="calico-apiserver/calico-apiserver-857d4f49b9-p879q"
Sep 4 15:46:38.895251 kubelet[1848]: I0904 15:46:38.895209 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7xzl\" (UniqueName: \"kubernetes.io/projected/5580c510-f994-4e0d-b8b6-f1d1fdc65ef4-kube-api-access-s7xzl\") pod \"calico-kube-controllers-68dbb789b-bbg75\" (UID: \"5580c510-f994-4e0d-b8b6-f1d1fdc65ef4\") " pod="calico-system/calico-kube-controllers-68dbb789b-bbg75"
Sep 4 15:46:38.895251 kubelet[1848]: I0904 15:46:38.895245 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eee38ed2-b930-4b4f-a878-aa34375eb46d-goldmane-key-pair\") pod \"goldmane-54d579b49d-92gzs\" (UID: \"eee38ed2-b930-4b4f-a878-aa34375eb46d\") " pod="calico-system/goldmane-54d579b49d-92gzs"
Sep 4 15:46:38.895300 kubelet[1848]: I0904 15:46:38.895267 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4241afa6-f0f9-4549-8438-e207627c5501-whisker-ca-bundle\") pod \"whisker-b946ffd57-vlmjj\" (UID: \"4241afa6-f0f9-4549-8438-e207627c5501\") " pod="calico-system/whisker-b946ffd57-vlmjj"
Sep 4 15:46:38.895300 kubelet[1848]: I0904 15:46:38.895285 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njh2n\" (UniqueName: \"kubernetes.io/projected/38c417fb-b280-4f0a-ba6b-ae890b1ccfc3-kube-api-access-njh2n\") pod \"coredns-668d6bf9bc-jwndq\" (UID: \"38c417fb-b280-4f0a-ba6b-ae890b1ccfc3\") " pod="kube-system/coredns-668d6bf9bc-jwndq"
Sep 4 15:46:38.895300 kubelet[1848]: I0904 15:46:38.895300 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqknb\" (UniqueName: \"kubernetes.io/projected/4241afa6-f0f9-4549-8438-e207627c5501-kube-api-access-hqknb\") pod \"whisker-b946ffd57-vlmjj\" (UID: \"4241afa6-f0f9-4549-8438-e207627c5501\") " pod="calico-system/whisker-b946ffd57-vlmjj"
Sep 4 15:46:38.900510 systemd[1]: Created slice kubepods-besteffort-pod178e1fbe_3648_4cc5_ab1e_eff6183d2fa0.slice - libcontainer container kubepods-besteffort-pod178e1fbe_3648_4cc5_ab1e_eff6183d2fa0.slice.
Sep 4 15:46:38.905412 systemd[1]: Created slice kubepods-besteffort-pod5580c510_f994_4e0d_b8b6_f1d1fdc65ef4.slice - libcontainer container kubepods-besteffort-pod5580c510_f994_4e0d_b8b6_f1d1fdc65ef4.slice.
Sep 4 15:46:38.910330 systemd[1]: Created slice kubepods-besteffort-pod4241afa6_f0f9_4549_8438_e207627c5501.slice - libcontainer container kubepods-besteffort-pod4241afa6_f0f9_4549_8438_e207627c5501.slice.
Sep 4 15:46:38.913086 systemd[1]: Created slice kubepods-besteffort-podeee38ed2_b930_4b4f_a878_aa34375eb46d.slice - libcontainer container kubepods-besteffort-podeee38ed2_b930_4b4f_a878_aa34375eb46d.slice.
Sep 4 15:46:39.055492 containerd[1507]: time="2025-09-04T15:46:39.054801519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 15:46:39.180011 kubelet[1848]: E0904 15:46:39.179692 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:39.180631 containerd[1507]: time="2025-09-04T15:46:39.180342013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jwndq,Uid:38c417fb-b280-4f0a-ba6b-ae890b1ccfc3,Namespace:kube-system,Attempt:0,}" Sep 4 15:46:39.192866 kubelet[1848]: E0904 15:46:39.192796 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:39.193404 containerd[1507]: time="2025-09-04T15:46:39.193360342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zf8sn,Uid:8278b82f-9e5a-4962-981e-5deac349f2fe,Namespace:kube-system,Attempt:0,}" Sep 4 15:46:39.198467 containerd[1507]: time="2025-09-04T15:46:39.198425725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857d4f49b9-497bn,Uid:39da1048-d7f6-4184-8f7e-1f20fd878471,Namespace:calico-apiserver,Attempt:0,}" Sep 4 15:46:39.204520 containerd[1507]: time="2025-09-04T15:46:39.204479098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857d4f49b9-p879q,Uid:178e1fbe-3648-4cc5-ab1e-eff6183d2fa0,Namespace:calico-apiserver,Attempt:0,}" Sep 4 15:46:39.209439 containerd[1507]: time="2025-09-04T15:46:39.209310337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68dbb789b-bbg75,Uid:5580c510-f994-4e0d-b8b6-f1d1fdc65ef4,Namespace:calico-system,Attempt:0,}" Sep 4 15:46:39.215385 containerd[1507]: time="2025-09-04T15:46:39.215330804Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-b946ffd57-vlmjj,Uid:4241afa6-f0f9-4549-8438-e207627c5501,Namespace:calico-system,Attempt:0,}" Sep 4 15:46:39.215834 containerd[1507]: time="2025-09-04T15:46:39.215757268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-92gzs,Uid:eee38ed2-b930-4b4f-a878-aa34375eb46d,Namespace:calico-system,Attempt:0,}" Sep 4 15:46:39.267659 containerd[1507]: time="2025-09-04T15:46:39.267464096Z" level=error msg="Failed to destroy network for sandbox \"489f88d9638d632bbda033ee14494bace0c89b098b28b95f7a18cac371ddec78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.270863 containerd[1507]: time="2025-09-04T15:46:39.270249023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zf8sn,Uid:8278b82f-9e5a-4962-981e-5deac349f2fe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"489f88d9638d632bbda033ee14494bace0c89b098b28b95f7a18cac371ddec78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.270987 kubelet[1848]: E0904 15:46:39.270669 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"489f88d9638d632bbda033ee14494bace0c89b098b28b95f7a18cac371ddec78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.270987 kubelet[1848]: E0904 15:46:39.270759 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"489f88d9638d632bbda033ee14494bace0c89b098b28b95f7a18cac371ddec78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zf8sn" Sep 4 15:46:39.270987 kubelet[1848]: E0904 15:46:39.270778 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"489f88d9638d632bbda033ee14494bace0c89b098b28b95f7a18cac371ddec78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zf8sn" Sep 4 15:46:39.271089 kubelet[1848]: E0904 15:46:39.270822 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zf8sn_kube-system(8278b82f-9e5a-4962-981e-5deac349f2fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zf8sn_kube-system(8278b82f-9e5a-4962-981e-5deac349f2fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"489f88d9638d632bbda033ee14494bace0c89b098b28b95f7a18cac371ddec78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zf8sn" podUID="8278b82f-9e5a-4962-981e-5deac349f2fe" Sep 4 15:46:39.278339 containerd[1507]: time="2025-09-04T15:46:39.277663113Z" level=error msg="Failed to destroy network for sandbox \"e98d77499cba582e7dc6102a2de7beccfca4c15b3956fb95a73e171a46ebe7a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.278600 
containerd[1507]: time="2025-09-04T15:46:39.277664472Z" level=error msg="Failed to destroy network for sandbox \"68ffdc707b4292af9a0da9ceefe9ead52818fd288afda0dbd02795c1d7a0bac4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.279270 containerd[1507]: time="2025-09-04T15:46:39.279221028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jwndq,Uid:38c417fb-b280-4f0a-ba6b-ae890b1ccfc3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98d77499cba582e7dc6102a2de7beccfca4c15b3956fb95a73e171a46ebe7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.279531 kubelet[1848]: E0904 15:46:39.279501 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98d77499cba582e7dc6102a2de7beccfca4c15b3956fb95a73e171a46ebe7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.279752 kubelet[1848]: E0904 15:46:39.279636 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98d77499cba582e7dc6102a2de7beccfca4c15b3956fb95a73e171a46ebe7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jwndq" Sep 4 15:46:39.279752 kubelet[1848]: E0904 15:46:39.279660 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98d77499cba582e7dc6102a2de7beccfca4c15b3956fb95a73e171a46ebe7a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jwndq" Sep 4 15:46:39.279752 kubelet[1848]: E0904 15:46:39.279702 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jwndq_kube-system(38c417fb-b280-4f0a-ba6b-ae890b1ccfc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jwndq_kube-system(38c417fb-b280-4f0a-ba6b-ae890b1ccfc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e98d77499cba582e7dc6102a2de7beccfca4c15b3956fb95a73e171a46ebe7a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jwndq" podUID="38c417fb-b280-4f0a-ba6b-ae890b1ccfc3" Sep 4 15:46:39.280939 containerd[1507]: time="2025-09-04T15:46:39.280897334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857d4f49b9-497bn,Uid:39da1048-d7f6-4184-8f7e-1f20fd878471,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ffdc707b4292af9a0da9ceefe9ead52818fd288afda0dbd02795c1d7a0bac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.281195 kubelet[1848]: E0904 15:46:39.281169 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68ffdc707b4292af9a0da9ceefe9ead52818fd288afda0dbd02795c1d7a0bac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.281339 kubelet[1848]: E0904 15:46:39.281289 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ffdc707b4292af9a0da9ceefe9ead52818fd288afda0dbd02795c1d7a0bac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857d4f49b9-497bn" Sep 4 15:46:39.281408 kubelet[1848]: E0904 15:46:39.281390 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ffdc707b4292af9a0da9ceefe9ead52818fd288afda0dbd02795c1d7a0bac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857d4f49b9-497bn" Sep 4 15:46:39.281520 kubelet[1848]: E0904 15:46:39.281482 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-857d4f49b9-497bn_calico-apiserver(39da1048-d7f6-4184-8f7e-1f20fd878471)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-857d4f49b9-497bn_calico-apiserver(39da1048-d7f6-4184-8f7e-1f20fd878471)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68ffdc707b4292af9a0da9ceefe9ead52818fd288afda0dbd02795c1d7a0bac4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-857d4f49b9-497bn" podUID="39da1048-d7f6-4184-8f7e-1f20fd878471" Sep 4 15:46:39.291473 containerd[1507]: time="2025-09-04T15:46:39.291433331Z" level=error msg="Failed to destroy network for sandbox \"a0ef6acaae593e2c81c6b7eab929255990080b62079f34f6bc3b9cd83f3d96b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.293496 containerd[1507]: time="2025-09-04T15:46:39.293457772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857d4f49b9-p879q,Uid:178e1fbe-3648-4cc5-ab1e-eff6183d2fa0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ef6acaae593e2c81c6b7eab929255990080b62079f34f6bc3b9cd83f3d96b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.293740 kubelet[1848]: E0904 15:46:39.293711 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ef6acaae593e2c81c6b7eab929255990080b62079f34f6bc3b9cd83f3d96b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.294083 kubelet[1848]: E0904 15:46:39.293823 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ef6acaae593e2c81c6b7eab929255990080b62079f34f6bc3b9cd83f3d96b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-857d4f49b9-p879q" Sep 4 15:46:39.294083 kubelet[1848]: E0904 15:46:39.293844 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ef6acaae593e2c81c6b7eab929255990080b62079f34f6bc3b9cd83f3d96b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857d4f49b9-p879q" Sep 4 15:46:39.294083 kubelet[1848]: E0904 15:46:39.293878 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-857d4f49b9-p879q_calico-apiserver(178e1fbe-3648-4cc5-ab1e-eff6183d2fa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-857d4f49b9-p879q_calico-apiserver(178e1fbe-3648-4cc5-ab1e-eff6183d2fa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0ef6acaae593e2c81c6b7eab929255990080b62079f34f6bc3b9cd83f3d96b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857d4f49b9-p879q" podUID="178e1fbe-3648-4cc5-ab1e-eff6183d2fa0" Sep 4 15:46:39.299834 containerd[1507]: time="2025-09-04T15:46:39.299787511Z" level=error msg="Failed to destroy network for sandbox \"7c2fd12d8fe4df301236da61f23e6423b5d74f16599ba42083e750185d23c1b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.302713 containerd[1507]: time="2025-09-04T15:46:39.302655963Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68dbb789b-bbg75,Uid:5580c510-f994-4e0d-b8b6-f1d1fdc65ef4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2fd12d8fe4df301236da61f23e6423b5d74f16599ba42083e750185d23c1b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.303224 kubelet[1848]: E0904 15:46:39.303035 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2fd12d8fe4df301236da61f23e6423b5d74f16599ba42083e750185d23c1b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.303224 kubelet[1848]: E0904 15:46:39.303102 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2fd12d8fe4df301236da61f23e6423b5d74f16599ba42083e750185d23c1b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68dbb789b-bbg75" Sep 4 15:46:39.303224 kubelet[1848]: E0904 15:46:39.303119 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2fd12d8fe4df301236da61f23e6423b5d74f16599ba42083e750185d23c1b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68dbb789b-bbg75" Sep 4 15:46:39.303345 kubelet[1848]: E0904 15:46:39.303190 1848 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68dbb789b-bbg75_calico-system(5580c510-f994-4e0d-b8b6-f1d1fdc65ef4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68dbb789b-bbg75_calico-system(5580c510-f994-4e0d-b8b6-f1d1fdc65ef4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c2fd12d8fe4df301236da61f23e6423b5d74f16599ba42083e750185d23c1b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68dbb789b-bbg75" podUID="5580c510-f994-4e0d-b8b6-f1d1fdc65ef4" Sep 4 15:46:39.316175 containerd[1507]: time="2025-09-04T15:46:39.315908236Z" level=error msg="Failed to destroy network for sandbox \"3f94ce349c9ece38c48273854574db709db8e2a81232ddaf23df4a7b71f8634c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.317056 containerd[1507]: time="2025-09-04T15:46:39.316997025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-92gzs,Uid:eee38ed2-b930-4b4f-a878-aa34375eb46d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f94ce349c9ece38c48273854574db709db8e2a81232ddaf23df4a7b71f8634c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.318760 kubelet[1848]: E0904 15:46:39.317521 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3f94ce349c9ece38c48273854574db709db8e2a81232ddaf23df4a7b71f8634c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.318852 kubelet[1848]: E0904 15:46:39.318813 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f94ce349c9ece38c48273854574db709db8e2a81232ddaf23df4a7b71f8634c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-92gzs" Sep 4 15:46:39.318852 kubelet[1848]: E0904 15:46:39.318847 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f94ce349c9ece38c48273854574db709db8e2a81232ddaf23df4a7b71f8634c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-92gzs" Sep 4 15:46:39.319139 kubelet[1848]: E0904 15:46:39.318894 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-92gzs_calico-system(eee38ed2-b930-4b4f-a878-aa34375eb46d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-92gzs_calico-system(eee38ed2-b930-4b4f-a878-aa34375eb46d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f94ce349c9ece38c48273854574db709db8e2a81232ddaf23df4a7b71f8634c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-92gzs" 
podUID="eee38ed2-b930-4b4f-a878-aa34375eb46d" Sep 4 15:46:39.319194 containerd[1507]: time="2025-09-04T15:46:39.318977725Z" level=error msg="Failed to destroy network for sandbox \"ff52577de151878e3eb0c89ef4491f781a1688e88dd880e7fc32e02d72819a6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.320364 containerd[1507]: time="2025-09-04T15:46:39.320330924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b946ffd57-vlmjj,Uid:4241afa6-f0f9-4549-8438-e207627c5501,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff52577de151878e3eb0c89ef4491f781a1688e88dd880e7fc32e02d72819a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.320631 kubelet[1848]: E0904 15:46:39.320606 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff52577de151878e3eb0c89ef4491f781a1688e88dd880e7fc32e02d72819a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:39.320691 kubelet[1848]: E0904 15:46:39.320648 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff52577de151878e3eb0c89ef4491f781a1688e88dd880e7fc32e02d72819a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b946ffd57-vlmjj" Sep 4 15:46:39.320691 kubelet[1848]: E0904 
15:46:39.320665 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff52577de151878e3eb0c89ef4491f781a1688e88dd880e7fc32e02d72819a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b946ffd57-vlmjj" Sep 4 15:46:39.320735 kubelet[1848]: E0904 15:46:39.320694 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b946ffd57-vlmjj_calico-system(4241afa6-f0f9-4549-8438-e207627c5501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b946ffd57-vlmjj_calico-system(4241afa6-f0f9-4549-8438-e207627c5501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff52577de151878e3eb0c89ef4491f781a1688e88dd880e7fc32e02d72819a6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b946ffd57-vlmjj" podUID="4241afa6-f0f9-4549-8438-e207627c5501" Sep 4 15:46:39.878980 kubelet[1848]: E0904 15:46:39.878935 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:40.020743 systemd[1]: Created slice kubepods-besteffort-pod98a9b3c2_bd87_49de_849d_b1a3195b1b9f.slice - libcontainer container kubepods-besteffort-pod98a9b3c2_bd87_49de_849d_b1a3195b1b9f.slice. 
Sep 4 15:46:40.022579 containerd[1507]: time="2025-09-04T15:46:40.022546658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tbln,Uid:98a9b3c2-bd87-49de-849d-b1a3195b1b9f,Namespace:calico-system,Attempt:0,}" Sep 4 15:46:40.062748 containerd[1507]: time="2025-09-04T15:46:40.062699071Z" level=error msg="Failed to destroy network for sandbox \"cfddd1ad4da5fe454a5bec549c4d0a0ac9765e4ea2c4fc761ef4a8f7b44a7999\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:40.063825 containerd[1507]: time="2025-09-04T15:46:40.063783170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tbln,Uid:98a9b3c2-bd87-49de-849d-b1a3195b1b9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfddd1ad4da5fe454a5bec549c4d0a0ac9765e4ea2c4fc761ef4a8f7b44a7999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:40.064068 kubelet[1848]: E0904 15:46:40.064030 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfddd1ad4da5fe454a5bec549c4d0a0ac9765e4ea2c4fc761ef4a8f7b44a7999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:40.064127 kubelet[1848]: E0904 15:46:40.064088 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfddd1ad4da5fe454a5bec549c4d0a0ac9765e4ea2c4fc761ef4a8f7b44a7999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4tbln" Sep 4 15:46:40.064127 kubelet[1848]: E0904 15:46:40.064108 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfddd1ad4da5fe454a5bec549c4d0a0ac9765e4ea2c4fc761ef4a8f7b44a7999\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4tbln" Sep 4 15:46:40.064178 kubelet[1848]: E0904 15:46:40.064149 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4tbln_calico-system(98a9b3c2-bd87-49de-849d-b1a3195b1b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4tbln_calico-system(98a9b3c2-bd87-49de-849d-b1a3195b1b9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfddd1ad4da5fe454a5bec549c4d0a0ac9765e4ea2c4fc761ef4a8f7b44a7999\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4tbln" podUID="98a9b3c2-bd87-49de-849d-b1a3195b1b9f" Sep 4 15:46:40.132481 systemd[1]: run-netns-cni\x2dfcbdae39\x2dc659\x2db3f8\x2d6bd7\x2d2d2c17d24286.mount: Deactivated successfully. Sep 4 15:46:40.132572 systemd[1]: run-netns-cni\x2d5a25cc1b\x2d62e3\x2d8fe8\x2dab56\x2d09671befef46.mount: Deactivated successfully. Sep 4 15:46:40.132625 systemd[1]: run-netns-cni\x2d6ef2c3f9\x2da305\x2d433d\x2d623f\x2d2f414e6cf2d2.mount: Deactivated successfully. Sep 4 15:46:40.132670 systemd[1]: run-netns-cni\x2d20ebe086\x2d7103\x2d61e5\x2d3ff4\x2df19940dff4b9.mount: Deactivated successfully. 
Sep 4 15:46:40.880027 kubelet[1848]: E0904 15:46:40.879977 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:41.868849 kubelet[1848]: E0904 15:46:41.868810 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:41.880290 kubelet[1848]: E0904 15:46:41.880252 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:42.880856 kubelet[1848]: E0904 15:46:42.880804 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:43.836516 systemd[1]: Created slice kubepods-besteffort-pod39c89dc6_66bc_4c67_bbfc_64965a9657ac.slice - libcontainer container kubepods-besteffort-pod39c89dc6_66bc_4c67_bbfc_64965a9657ac.slice. Sep 4 15:46:43.881152 kubelet[1848]: E0904 15:46:43.881110 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:43.923605 kubelet[1848]: I0904 15:46:43.923560 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzq2n\" (UniqueName: \"kubernetes.io/projected/39c89dc6-66bc-4c67-bbfc-64965a9657ac-kube-api-access-kzq2n\") pod \"nginx-deployment-7fcdb87857-nz2hk\" (UID: \"39c89dc6-66bc-4c67-bbfc-64965a9657ac\") " pod="default/nginx-deployment-7fcdb87857-nz2hk" Sep 4 15:46:44.139642 containerd[1507]: time="2025-09-04T15:46:44.139604835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nz2hk,Uid:39c89dc6-66bc-4c67-bbfc-64965a9657ac,Namespace:default,Attempt:0,}" Sep 4 15:46:44.180089 containerd[1507]: time="2025-09-04T15:46:44.180044270Z" level=error msg="Failed to destroy network for sandbox \"91b4190e9363e94a2e61acea1cffdaeb561e2b374832b590f1052af0455f9149\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:44.181592 systemd[1]: run-netns-cni\x2d1a7b3854\x2d9c21\x2dc5aa\x2db93a\x2dda1807aaea9b.mount: Deactivated successfully. Sep 4 15:46:44.181743 containerd[1507]: time="2025-09-04T15:46:44.181701573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nz2hk,Uid:39c89dc6-66bc-4c67-bbfc-64965a9657ac,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b4190e9363e94a2e61acea1cffdaeb561e2b374832b590f1052af0455f9149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:44.182605 kubelet[1848]: E0904 15:46:44.182556 1848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b4190e9363e94a2e61acea1cffdaeb561e2b374832b590f1052af0455f9149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 15:46:44.182688 kubelet[1848]: E0904 15:46:44.182618 1848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b4190e9363e94a2e61acea1cffdaeb561e2b374832b590f1052af0455f9149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nz2hk" Sep 4 15:46:44.182688 kubelet[1848]: E0904 15:46:44.182638 1848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"91b4190e9363e94a2e61acea1cffdaeb561e2b374832b590f1052af0455f9149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-nz2hk" Sep 4 15:46:44.182738 kubelet[1848]: E0904 15:46:44.182682 1848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-nz2hk_default(39c89dc6-66bc-4c67-bbfc-64965a9657ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-nz2hk_default(39c89dc6-66bc-4c67-bbfc-64965a9657ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91b4190e9363e94a2e61acea1cffdaeb561e2b374832b590f1052af0455f9149\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-nz2hk" podUID="39c89dc6-66bc-4c67-bbfc-64965a9657ac" Sep 4 15:46:44.881727 kubelet[1848]: E0904 15:46:44.881667 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:45.883333 kubelet[1848]: E0904 15:46:45.882731 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:46.883433 kubelet[1848]: E0904 15:46:46.883388 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:46.999862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516330794.mount: Deactivated successfully. 
Sep 4 15:46:47.288125 containerd[1507]: time="2025-09-04T15:46:47.287886269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:46:47.292747 containerd[1507]: time="2025-09-04T15:46:47.292684564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 4 15:46:47.293648 containerd[1507]: time="2025-09-04T15:46:47.293615054Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:46:47.295407 containerd[1507]: time="2025-09-04T15:46:47.295372340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:46:47.296162 containerd[1507]: time="2025-09-04T15:46:47.296122755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 8.240504653s" Sep 4 15:46:47.296193 containerd[1507]: time="2025-09-04T15:46:47.296160185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 4 15:46:47.303482 containerd[1507]: time="2025-09-04T15:46:47.303439027Z" level=info msg="CreateContainer within sandbox \"c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 15:46:47.311371 containerd[1507]: time="2025-09-04T15:46:47.311328558Z" level=info msg="Container 
46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:46:47.313906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220042220.mount: Deactivated successfully. Sep 4 15:46:47.320399 containerd[1507]: time="2025-09-04T15:46:47.320362526Z" level=info msg="CreateContainer within sandbox \"c3f20ab3cbe223762784ac2c398ee8e5bc409d29283419706adacbdcf0a03981\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c\"" Sep 4 15:46:47.320919 containerd[1507]: time="2025-09-04T15:46:47.320890635Z" level=info msg="StartContainer for \"46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c\"" Sep 4 15:46:47.325773 containerd[1507]: time="2025-09-04T15:46:47.325737358Z" level=info msg="connecting to shim 46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c" address="unix:///run/containerd/s/d8ff4f5ef3e1b084d499f98a542fdd9486389531733ac3097ad396a381f11de2" protocol=ttrpc version=3 Sep 4 15:46:47.344506 systemd[1]: Started cri-containerd-46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c.scope - libcontainer container 46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c. Sep 4 15:46:47.383557 containerd[1507]: time="2025-09-04T15:46:47.383508885Z" level=info msg="StartContainer for \"46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c\" returns successfully" Sep 4 15:46:47.490084 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 15:46:47.490447 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 15:46:47.743631 kubelet[1848]: I0904 15:46:47.743590 1848 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4241afa6-f0f9-4549-8438-e207627c5501-whisker-backend-key-pair\") pod \"4241afa6-f0f9-4549-8438-e207627c5501\" (UID: \"4241afa6-f0f9-4549-8438-e207627c5501\") " Sep 4 15:46:47.743631 kubelet[1848]: I0904 15:46:47.743633 1848 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqknb\" (UniqueName: \"kubernetes.io/projected/4241afa6-f0f9-4549-8438-e207627c5501-kube-api-access-hqknb\") pod \"4241afa6-f0f9-4549-8438-e207627c5501\" (UID: \"4241afa6-f0f9-4549-8438-e207627c5501\") " Sep 4 15:46:47.743798 kubelet[1848]: I0904 15:46:47.743660 1848 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4241afa6-f0f9-4549-8438-e207627c5501-whisker-ca-bundle\") pod \"4241afa6-f0f9-4549-8438-e207627c5501\" (UID: \"4241afa6-f0f9-4549-8438-e207627c5501\") " Sep 4 15:46:47.744122 kubelet[1848]: I0904 15:46:47.744003 1848 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4241afa6-f0f9-4549-8438-e207627c5501-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4241afa6-f0f9-4549-8438-e207627c5501" (UID: "4241afa6-f0f9-4549-8438-e207627c5501"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 15:46:47.746400 kubelet[1848]: I0904 15:46:47.746364 1848 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4241afa6-f0f9-4549-8438-e207627c5501-kube-api-access-hqknb" (OuterVolumeSpecName: "kube-api-access-hqknb") pod "4241afa6-f0f9-4549-8438-e207627c5501" (UID: "4241afa6-f0f9-4549-8438-e207627c5501"). InnerVolumeSpecName "kube-api-access-hqknb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 15:46:47.746825 kubelet[1848]: I0904 15:46:47.746800 1848 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4241afa6-f0f9-4549-8438-e207627c5501-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4241afa6-f0f9-4549-8438-e207627c5501" (UID: "4241afa6-f0f9-4549-8438-e207627c5501"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 15:46:47.844268 kubelet[1848]: I0904 15:46:47.844226 1848 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4241afa6-f0f9-4549-8438-e207627c5501-whisker-ca-bundle\") on node \"10.0.0.45\" DevicePath \"\"" Sep 4 15:46:47.844268 kubelet[1848]: I0904 15:46:47.844257 1848 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4241afa6-f0f9-4549-8438-e207627c5501-whisker-backend-key-pair\") on node \"10.0.0.45\" DevicePath \"\"" Sep 4 15:46:47.844268 kubelet[1848]: I0904 15:46:47.844269 1848 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hqknb\" (UniqueName: \"kubernetes.io/projected/4241afa6-f0f9-4549-8438-e207627c5501-kube-api-access-hqknb\") on node \"10.0.0.45\" DevicePath \"\"" Sep 4 15:46:47.883763 kubelet[1848]: E0904 15:46:47.883721 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:48.001720 systemd[1]: var-lib-kubelet-pods-4241afa6\x2df0f9\x2d4549\x2d8438\x2de207627c5501-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhqknb.mount: Deactivated successfully. Sep 4 15:46:48.001816 systemd[1]: var-lib-kubelet-pods-4241afa6\x2df0f9\x2d4549\x2d8438\x2de207627c5501-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 4 15:46:48.021530 systemd[1]: Removed slice kubepods-besteffort-pod4241afa6_f0f9_4549_8438_e207627c5501.slice - libcontainer container kubepods-besteffort-pod4241afa6_f0f9_4549_8438_e207627c5501.slice. Sep 4 15:46:48.202698 kubelet[1848]: I0904 15:46:48.202630 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bz5sb" podStartSLOduration=3.7544371659999998 podStartE2EDuration="26.202613029s" podCreationTimestamp="2025-09-04 15:46:22 +0000 UTC" firstStartedPulling="2025-09-04 15:46:24.848665914 +0000 UTC m=+3.904956239" lastFinishedPulling="2025-09-04 15:46:47.296841777 +0000 UTC m=+26.353132102" observedRunningTime="2025-09-04 15:46:48.201769344 +0000 UTC m=+27.258059669" watchObservedRunningTime="2025-09-04 15:46:48.202613029 +0000 UTC m=+27.258903354" Sep 4 15:46:48.224935 systemd[1]: Created slice kubepods-besteffort-pod7f949e1b_b9a9_4255_8bc8_3fc78089be68.slice - libcontainer container kubepods-besteffort-pod7f949e1b_b9a9_4255_8bc8_3fc78089be68.slice. 
Sep 4 15:46:48.347129 kubelet[1848]: I0904 15:46:48.346983 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2g6s\" (UniqueName: \"kubernetes.io/projected/7f949e1b-b9a9-4255-8bc8-3fc78089be68-kube-api-access-x2g6s\") pod \"whisker-66bd4fb94d-w5l74\" (UID: \"7f949e1b-b9a9-4255-8bc8-3fc78089be68\") " pod="calico-system/whisker-66bd4fb94d-w5l74" Sep 4 15:46:48.347129 kubelet[1848]: I0904 15:46:48.347031 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f949e1b-b9a9-4255-8bc8-3fc78089be68-whisker-ca-bundle\") pod \"whisker-66bd4fb94d-w5l74\" (UID: \"7f949e1b-b9a9-4255-8bc8-3fc78089be68\") " pod="calico-system/whisker-66bd4fb94d-w5l74" Sep 4 15:46:48.347417 kubelet[1848]: I0904 15:46:48.347398 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f949e1b-b9a9-4255-8bc8-3fc78089be68-whisker-backend-key-pair\") pod \"whisker-66bd4fb94d-w5l74\" (UID: \"7f949e1b-b9a9-4255-8bc8-3fc78089be68\") " pod="calico-system/whisker-66bd4fb94d-w5l74" Sep 4 15:46:48.528285 containerd[1507]: time="2025-09-04T15:46:48.528185022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66bd4fb94d-w5l74,Uid:7f949e1b-b9a9-4255-8bc8-3fc78089be68,Namespace:calico-system,Attempt:0,}" Sep 4 15:46:48.642456 systemd-networkd[1427]: calid474a4efe69: Link UP Sep 4 15:46:48.642885 systemd-networkd[1427]: calid474a4efe69: Gained carrier Sep 4 15:46:48.652871 containerd[1507]: 2025-09-04 15:46:48.546 [INFO][2658] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 15:46:48.652871 containerd[1507]: 2025-09-04 15:46:48.563 [INFO][2658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0 
whisker-66bd4fb94d- calico-system 7f949e1b-b9a9-4255-8bc8-3fc78089be68 1050 0 2025-09-04 15:46:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66bd4fb94d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 10.0.0.45 whisker-66bd4fb94d-w5l74 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid474a4efe69 [] [] }} ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Namespace="calico-system" Pod="whisker-66bd4fb94d-w5l74" WorkloadEndpoint="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-" Sep 4 15:46:48.652871 containerd[1507]: 2025-09-04 15:46:48.563 [INFO][2658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Namespace="calico-system" Pod="whisker-66bd4fb94d-w5l74" WorkloadEndpoint="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" Sep 4 15:46:48.652871 containerd[1507]: 2025-09-04 15:46:48.601 [INFO][2673] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" HandleID="k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Workload="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.601 [INFO][2673] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" HandleID="k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Workload="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.45", "pod":"whisker-66bd4fb94d-w5l74", "timestamp":"2025-09-04 15:46:48.601034949 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.601 [INFO][2673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.601 [INFO][2673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.601 [INFO][2673] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.611 [INFO][2673] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" host="10.0.0.45" Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.616 [INFO][2673] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.620 [INFO][2673] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.622 [INFO][2673] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.625 [INFO][2673] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:48.653050 containerd[1507]: 2025-09-04 15:46:48.625 [INFO][2673] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" host="10.0.0.45" Sep 4 15:46:48.653238 containerd[1507]: 2025-09-04 15:46:48.626 [INFO][2673] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261 Sep 4 15:46:48.653238 containerd[1507]: 
2025-09-04 15:46:48.629 [INFO][2673] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" host="10.0.0.45" Sep 4 15:46:48.653238 containerd[1507]: 2025-09-04 15:46:48.634 [INFO][2673] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.129/26] block=192.168.34.128/26 handle="k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" host="10.0.0.45" Sep 4 15:46:48.653238 containerd[1507]: 2025-09-04 15:46:48.634 [INFO][2673] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.129/26] handle="k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" host="10.0.0.45" Sep 4 15:46:48.653238 containerd[1507]: 2025-09-04 15:46:48.634 [INFO][2673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:46:48.653238 containerd[1507]: 2025-09-04 15:46:48.634 [INFO][2673] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.129/26] IPv6=[] ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" HandleID="k8s-pod-network.07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Workload="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" Sep 4 15:46:48.653362 containerd[1507]: 2025-09-04 15:46:48.637 [INFO][2658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Namespace="calico-system" Pod="whisker-66bd4fb94d-w5l74" WorkloadEndpoint="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0", GenerateName:"whisker-66bd4fb94d-", Namespace:"calico-system", SelfLink:"", UID:"7f949e1b-b9a9-4255-8bc8-3fc78089be68", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 
4, 15, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66bd4fb94d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"whisker-66bd4fb94d-w5l74", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid474a4efe69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:48.653362 containerd[1507]: 2025-09-04 15:46:48.637 [INFO][2658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.129/32] ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Namespace="calico-system" Pod="whisker-66bd4fb94d-w5l74" WorkloadEndpoint="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" Sep 4 15:46:48.653433 containerd[1507]: 2025-09-04 15:46:48.637 [INFO][2658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid474a4efe69 ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Namespace="calico-system" Pod="whisker-66bd4fb94d-w5l74" WorkloadEndpoint="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" Sep 4 15:46:48.653433 containerd[1507]: 2025-09-04 15:46:48.642 [INFO][2658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Namespace="calico-system" Pod="whisker-66bd4fb94d-w5l74" 
WorkloadEndpoint="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" Sep 4 15:46:48.653471 containerd[1507]: 2025-09-04 15:46:48.643 [INFO][2658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Namespace="calico-system" Pod="whisker-66bd4fb94d-w5l74" WorkloadEndpoint="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0", GenerateName:"whisker-66bd4fb94d-", Namespace:"calico-system", SelfLink:"", UID:"7f949e1b-b9a9-4255-8bc8-3fc78089be68", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66bd4fb94d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261", Pod:"whisker-66bd4fb94d-w5l74", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid474a4efe69", MAC:"d2:fe:85:48:6e:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:48.653514 containerd[1507]: 2025-09-04 15:46:48.651 [INFO][2658] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" Namespace="calico-system" Pod="whisker-66bd4fb94d-w5l74" WorkloadEndpoint="10.0.0.45-k8s-whisker--66bd4fb94d--w5l74-eth0" Sep 4 15:46:48.669433 containerd[1507]: time="2025-09-04T15:46:48.669396476Z" level=info msg="connecting to shim 07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261" address="unix:///run/containerd/s/0c8dd6b31d32467deb9993154dff753ea0a99b54f00068eae21c9faf4d8dd752" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:48.693477 systemd[1]: Started cri-containerd-07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261.scope - libcontainer container 07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261. Sep 4 15:46:48.702775 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:48.721838 containerd[1507]: time="2025-09-04T15:46:48.721804897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66bd4fb94d-w5l74,Uid:7f949e1b-b9a9-4255-8bc8-3fc78089be68,Namespace:calico-system,Attempt:0,} returns sandbox id \"07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261\"" Sep 4 15:46:48.723202 containerd[1507]: time="2025-09-04T15:46:48.723178619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 15:46:48.884445 kubelet[1848]: E0904 15:46:48.884410 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:49.184566 containerd[1507]: time="2025-09-04T15:46:49.184507822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c\" id:\"16b0a0b2e300b890592b17390ef6e728ae75009eb678be1b9163e1ac19991a75\" pid:2845 exit_status:1 exited_at:{seconds:1757000809 nanos:184073636}" Sep 4 15:46:49.352960 systemd-networkd[1427]: vxlan.calico: 
Link UP Sep 4 15:46:49.352967 systemd-networkd[1427]: vxlan.calico: Gained carrier Sep 4 15:46:49.732553 systemd-networkd[1427]: calid474a4efe69: Gained IPv6LL Sep 4 15:46:49.884980 kubelet[1848]: E0904 15:46:49.884933 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:50.017454 kubelet[1848]: I0904 15:46:50.017343 1848 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4241afa6-f0f9-4549-8438-e207627c5501" path="/var/lib/kubelet/pods/4241afa6-f0f9-4549-8438-e207627c5501/volumes" Sep 4 15:46:50.154662 containerd[1507]: time="2025-09-04T15:46:50.154609963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c\" id:\"1c2582e67cbd9f544752322b3667b5716aa0dffa70f817f3dfb155be6fa7a552\" pid:2979 exit_status:1 exited_at:{seconds:1757000810 nanos:154274391}" Sep 4 15:46:50.885411 kubelet[1848]: E0904 15:46:50.885363 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:51.016020 containerd[1507]: time="2025-09-04T15:46:51.015916183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-92gzs,Uid:eee38ed2-b930-4b4f-a878-aa34375eb46d,Namespace:calico-system,Attempt:0,}" Sep 4 15:46:51.016020 containerd[1507]: time="2025-09-04T15:46:51.015945138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857d4f49b9-p879q,Uid:178e1fbe-3648-4cc5-ab1e-eff6183d2fa0,Namespace:calico-apiserver,Attempt:0,}" Sep 4 15:46:51.020950 kubelet[1848]: E0904 15:46:51.020608 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:51.021205 containerd[1507]: time="2025-09-04T15:46:51.021168701Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-zf8sn,Uid:8278b82f-9e5a-4962-981e-5deac349f2fe,Namespace:kube-system,Attempt:0,}" Sep 4 15:46:51.040993 kubelet[1848]: E0904 15:46:51.040965 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:51.041881 containerd[1507]: time="2025-09-04T15:46:51.041845515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jwndq,Uid:38c417fb-b280-4f0a-ba6b-ae890b1ccfc3,Namespace:kube-system,Attempt:0,}" Sep 4 15:46:51.161208 systemd-networkd[1427]: calib8d34343ca0: Link UP Sep 4 15:46:51.161844 systemd-networkd[1427]: calib8d34343ca0: Gained carrier Sep 4 15:46:51.174156 containerd[1507]: 2025-09-04 15:46:51.085 [INFO][2995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0 goldmane-54d579b49d- calico-system eee38ed2-b930-4b4f-a878-aa34375eb46d 951 0 2025-09-04 15:46:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 10.0.0.45 goldmane-54d579b49d-92gzs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib8d34343ca0 [] [] }} ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Namespace="calico-system" Pod="goldmane-54d579b49d-92gzs" WorkloadEndpoint="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-" Sep 4 15:46:51.174156 containerd[1507]: 2025-09-04 15:46:51.085 [INFO][2995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Namespace="calico-system" Pod="goldmane-54d579b49d-92gzs" WorkloadEndpoint="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" Sep 4 15:46:51.174156 
containerd[1507]: 2025-09-04 15:46:51.117 [INFO][3061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" HandleID="k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Workload="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.117 [INFO][3061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" HandleID="k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Workload="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000119770), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.45", "pod":"goldmane-54d579b49d-92gzs", "timestamp":"2025-09-04 15:46:51.117822736 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.118 [INFO][3061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.118 [INFO][3061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.118 [INFO][3061] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.129 [INFO][3061] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" host="10.0.0.45" Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.134 [INFO][3061] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.140 [INFO][3061] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.142 [INFO][3061] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.144 [INFO][3061] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.174630 containerd[1507]: 2025-09-04 15:46:51.144 [INFO][3061] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" host="10.0.0.45" Sep 4 15:46:51.174923 containerd[1507]: 2025-09-04 15:46:51.146 [INFO][3061] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f Sep 4 15:46:51.174923 containerd[1507]: 2025-09-04 15:46:51.150 [INFO][3061] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" host="10.0.0.45" Sep 4 15:46:51.174923 containerd[1507]: 2025-09-04 15:46:51.156 [INFO][3061] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.130/26] block=192.168.34.128/26 
handle="k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" host="10.0.0.45" Sep 4 15:46:51.174923 containerd[1507]: 2025-09-04 15:46:51.156 [INFO][3061] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.130/26] handle="k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" host="10.0.0.45" Sep 4 15:46:51.174923 containerd[1507]: 2025-09-04 15:46:51.156 [INFO][3061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:46:51.174923 containerd[1507]: 2025-09-04 15:46:51.156 [INFO][3061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.130/26] IPv6=[] ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" HandleID="k8s-pod-network.af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Workload="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" Sep 4 15:46:51.175080 containerd[1507]: 2025-09-04 15:46:51.158 [INFO][2995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Namespace="calico-system" Pod="goldmane-54d579b49d-92gzs" WorkloadEndpoint="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"eee38ed2-b930-4b4f-a878-aa34375eb46d", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"goldmane-54d579b49d-92gzs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib8d34343ca0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:51.175080 containerd[1507]: 2025-09-04 15:46:51.158 [INFO][2995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.130/32] ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Namespace="calico-system" Pod="goldmane-54d579b49d-92gzs" WorkloadEndpoint="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" Sep 4 15:46:51.175180 containerd[1507]: 2025-09-04 15:46:51.158 [INFO][2995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8d34343ca0 ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Namespace="calico-system" Pod="goldmane-54d579b49d-92gzs" WorkloadEndpoint="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" Sep 4 15:46:51.175180 containerd[1507]: 2025-09-04 15:46:51.163 [INFO][2995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Namespace="calico-system" Pod="goldmane-54d579b49d-92gzs" WorkloadEndpoint="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" Sep 4 15:46:51.175245 containerd[1507]: 2025-09-04 15:46:51.164 [INFO][2995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Namespace="calico-system" Pod="goldmane-54d579b49d-92gzs" 
WorkloadEndpoint="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"eee38ed2-b930-4b4f-a878-aa34375eb46d", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f", Pod:"goldmane-54d579b49d-92gzs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib8d34343ca0", MAC:"b2:af:02:9c:24:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:51.175321 containerd[1507]: 2025-09-04 15:46:51.172 [INFO][2995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" Namespace="calico-system" Pod="goldmane-54d579b49d-92gzs" WorkloadEndpoint="10.0.0.45-k8s-goldmane--54d579b49d--92gzs-eth0" Sep 4 15:46:51.192562 containerd[1507]: time="2025-09-04T15:46:51.192513162Z" level=info msg="connecting to shim 
af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f" address="unix:///run/containerd/s/44af61bb0ad661f314f4750e755457ef5057145040ca65410b1f77cf96a6468e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:51.204420 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Sep 4 15:46:51.216595 systemd[1]: Started cri-containerd-af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f.scope - libcontainer container af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f. Sep 4 15:46:51.228285 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:51.248582 containerd[1507]: time="2025-09-04T15:46:51.248535751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-92gzs,Uid:eee38ed2-b930-4b4f-a878-aa34375eb46d,Namespace:calico-system,Attempt:0,} returns sandbox id \"af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f\"" Sep 4 15:46:51.264096 systemd-networkd[1427]: calif0649e5f981: Link UP Sep 4 15:46:51.264274 systemd-networkd[1427]: calif0649e5f981: Gained carrier Sep 4 15:46:51.275062 containerd[1507]: 2025-09-04 15:46:51.090 [INFO][3018] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0 coredns-668d6bf9bc- kube-system 8278b82f-9e5a-4962-981e-5deac349f2fe 947 0 2025-09-04 15:45:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.45 coredns-668d6bf9bc-zf8sn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0649e5f981 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Namespace="kube-system" Pod="coredns-668d6bf9bc-zf8sn" 
WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-" Sep 4 15:46:51.275062 containerd[1507]: 2025-09-04 15:46:51.090 [INFO][3018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Namespace="kube-system" Pod="coredns-668d6bf9bc-zf8sn" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" Sep 4 15:46:51.275062 containerd[1507]: 2025-09-04 15:46:51.121 [INFO][3069] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" HandleID="k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Workload="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.121 [INFO][3069] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" HandleID="k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Workload="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d450), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.45", "pod":"coredns-668d6bf9bc-zf8sn", "timestamp":"2025-09-04 15:46:51.121765783 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.121 [INFO][3069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.156 [INFO][3069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.156 [INFO][3069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.230 [INFO][3069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" host="10.0.0.45" Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.235 [INFO][3069] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.243 [INFO][3069] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.245 [INFO][3069] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.247 [INFO][3069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.275294 containerd[1507]: 2025-09-04 15:46:51.247 [INFO][3069] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" host="10.0.0.45" Sep 4 15:46:51.275581 containerd[1507]: 2025-09-04 15:46:51.249 [INFO][3069] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af Sep 4 15:46:51.275581 containerd[1507]: 2025-09-04 15:46:51.254 [INFO][3069] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" host="10.0.0.45" Sep 4 15:46:51.275581 containerd[1507]: 2025-09-04 15:46:51.259 [INFO][3069] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.131/26] block=192.168.34.128/26 
handle="k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" host="10.0.0.45" Sep 4 15:46:51.275581 containerd[1507]: 2025-09-04 15:46:51.259 [INFO][3069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.131/26] handle="k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" host="10.0.0.45" Sep 4 15:46:51.275581 containerd[1507]: 2025-09-04 15:46:51.259 [INFO][3069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:46:51.275581 containerd[1507]: 2025-09-04 15:46:51.259 [INFO][3069] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.131/26] IPv6=[] ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" HandleID="k8s-pod-network.ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Workload="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" Sep 4 15:46:51.275720 containerd[1507]: 2025-09-04 15:46:51.261 [INFO][3018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Namespace="kube-system" Pod="coredns-668d6bf9bc-zf8sn" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8278b82f-9e5a-4962-981e-5deac349f2fe", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"coredns-668d6bf9bc-zf8sn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0649e5f981", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:51.275958 containerd[1507]: 2025-09-04 15:46:51.261 [INFO][3018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.131/32] ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Namespace="kube-system" Pod="coredns-668d6bf9bc-zf8sn" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" Sep 4 15:46:51.275958 containerd[1507]: 2025-09-04 15:46:51.261 [INFO][3018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0649e5f981 ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Namespace="kube-system" Pod="coredns-668d6bf9bc-zf8sn" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" Sep 4 15:46:51.275958 containerd[1507]: 2025-09-04 15:46:51.264 [INFO][3018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Namespace="kube-system" Pod="coredns-668d6bf9bc-zf8sn" 
WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" Sep 4 15:46:51.276122 containerd[1507]: 2025-09-04 15:46:51.265 [INFO][3018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Namespace="kube-system" Pod="coredns-668d6bf9bc-zf8sn" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8278b82f-9e5a-4962-981e-5deac349f2fe", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af", Pod:"coredns-668d6bf9bc-zf8sn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0649e5f981", MAC:"1a:37:d3:19:fb:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:51.276122 containerd[1507]: 2025-09-04 15:46:51.273 [INFO][3018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" Namespace="kube-system" Pod="coredns-668d6bf9bc-zf8sn" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--zf8sn-eth0" Sep 4 15:46:51.294534 containerd[1507]: time="2025-09-04T15:46:51.294474025Z" level=info msg="connecting to shim ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af" address="unix:///run/containerd/s/2f096dd02d6dc593cc6a1390890b7cabba828af81c9be1fea943e8b3c30cd546" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:51.324523 systemd[1]: Started cri-containerd-ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af.scope - libcontainer container ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af. 
Sep 4 15:46:51.335049 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:51.357630 containerd[1507]: time="2025-09-04T15:46:51.357571303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zf8sn,Uid:8278b82f-9e5a-4962-981e-5deac349f2fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af\"" Sep 4 15:46:51.359158 kubelet[1848]: E0904 15:46:51.358684 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:51.367029 systemd-networkd[1427]: cali826e68e31ea: Link UP Sep 4 15:46:51.367167 systemd-networkd[1427]: cali826e68e31ea: Gained carrier Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.084 [INFO][3000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0 calico-apiserver-857d4f49b9- calico-apiserver 178e1fbe-3648-4cc5-ab1e-eff6183d2fa0 950 0 2025-09-04 15:45:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:857d4f49b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.45 calico-apiserver-857d4f49b9-p879q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali826e68e31ea [] [] }} ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-p879q" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.084 [INFO][3000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-p879q" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.120 [INFO][3055] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" HandleID="k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Workload="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.120 [INFO][3055] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" HandleID="k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Workload="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137df0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.45", "pod":"calico-apiserver-857d4f49b9-p879q", "timestamp":"2025-09-04 15:46:51.120195043 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.122 [INFO][3055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.259 [INFO][3055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.259 [INFO][3055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.330 [INFO][3055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.337 [INFO][3055] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.343 [INFO][3055] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.345 [INFO][3055] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.348 [INFO][3055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.348 [INFO][3055] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.350 [INFO][3055] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.355 [INFO][3055] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.362 [INFO][3055] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.132/26] block=192.168.34.128/26 
handle="k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.362 [INFO][3055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.132/26] handle="k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" host="10.0.0.45" Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.362 [INFO][3055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:46:51.379358 containerd[1507]: 2025-09-04 15:46:51.363 [INFO][3055] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.132/26] IPv6=[] ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" HandleID="k8s-pod-network.a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Workload="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" Sep 4 15:46:51.379991 containerd[1507]: 2025-09-04 15:46:51.365 [INFO][3000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-p879q" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0", GenerateName:"calico-apiserver-857d4f49b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"178e1fbe-3648-4cc5-ab1e-eff6183d2fa0", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857d4f49b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"calico-apiserver-857d4f49b9-p879q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali826e68e31ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:51.379991 containerd[1507]: 2025-09-04 15:46:51.365 [INFO][3000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.132/32] ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-p879q" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" Sep 4 15:46:51.379991 containerd[1507]: 2025-09-04 15:46:51.365 [INFO][3000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali826e68e31ea ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-p879q" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" Sep 4 15:46:51.379991 containerd[1507]: 2025-09-04 15:46:51.367 [INFO][3000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-p879q" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" Sep 4 15:46:51.379991 containerd[1507]: 2025-09-04 15:46:51.368 [INFO][3000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-p879q" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0", GenerateName:"calico-apiserver-857d4f49b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"178e1fbe-3648-4cc5-ab1e-eff6183d2fa0", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857d4f49b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af", Pod:"calico-apiserver-857d4f49b9-p879q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali826e68e31ea", MAC:"c2:06:26:b6:4c:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:51.379991 containerd[1507]: 2025-09-04 15:46:51.377 [INFO][3000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-p879q" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--p879q-eth0" Sep 4 15:46:51.397957 containerd[1507]: time="2025-09-04T15:46:51.397874292Z" level=info msg="connecting to shim a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af" address="unix:///run/containerd/s/5cb1002f8205b8f00d057e568779836608ff4a9579f876af1e8fcc2ef86b46be" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:51.420513 systemd[1]: Started cri-containerd-a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af.scope - libcontainer container a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af. Sep 4 15:46:51.431843 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:51.453269 containerd[1507]: time="2025-09-04T15:46:51.452859679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857d4f49b9-p879q,Uid:178e1fbe-3648-4cc5-ab1e-eff6183d2fa0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af\"" Sep 4 15:46:51.466560 systemd-networkd[1427]: cali624e8c5d90f: Link UP Sep 4 15:46:51.466755 systemd-networkd[1427]: cali624e8c5d90f: Gained carrier Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.101 [INFO][3031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0 coredns-668d6bf9bc- kube-system 38c417fb-b280-4f0a-ba6b-ae890b1ccfc3 945 0 2025-09-04 15:45:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.45 coredns-668d6bf9bc-jwndq eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali624e8c5d90f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jwndq" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.102 [INFO][3031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jwndq" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.134 [INFO][3076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" HandleID="k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Workload="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.135 [INFO][3076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" HandleID="k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Workload="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b220), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.45", "pod":"coredns-668d6bf9bc-jwndq", "timestamp":"2025-09-04 15:46:51.134850726 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.135 [INFO][3076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.363 [INFO][3076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.364 [INFO][3076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.431 [INFO][3076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.438 [INFO][3076] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.443 [INFO][3076] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.445 [INFO][3076] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.448 [INFO][3076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.448 [INFO][3076] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.450 [INFO][3076] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.455 [INFO][3076] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.462 [INFO][3076] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.34.133/26] block=192.168.34.128/26 handle="k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.462 [INFO][3076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.133/26] handle="k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" host="10.0.0.45" Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.462 [INFO][3076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:46:51.478394 containerd[1507]: 2025-09-04 15:46:51.462 [INFO][3076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.133/26] IPv6=[] ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" HandleID="k8s-pod-network.e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Workload="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" Sep 4 15:46:51.478862 containerd[1507]: 2025-09-04 15:46:51.464 [INFO][3031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jwndq" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"38c417fb-b280-4f0a-ba6b-ae890b1ccfc3", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"coredns-668d6bf9bc-jwndq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali624e8c5d90f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:51.478862 containerd[1507]: 2025-09-04 15:46:51.464 [INFO][3031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.133/32] ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jwndq" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" Sep 4 15:46:51.478862 containerd[1507]: 2025-09-04 15:46:51.464 [INFO][3031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali624e8c5d90f ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jwndq" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" Sep 4 15:46:51.478862 containerd[1507]: 2025-09-04 15:46:51.466 [INFO][3031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-jwndq" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" Sep 4 15:46:51.478862 containerd[1507]: 2025-09-04 15:46:51.466 [INFO][3031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jwndq" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"38c417fb-b280-4f0a-ba6b-ae890b1ccfc3", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b", Pod:"coredns-668d6bf9bc-jwndq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali624e8c5d90f", MAC:"6e:7c:16:92:81:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:51.478862 containerd[1507]: 2025-09-04 15:46:51.476 [INFO][3031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" Namespace="kube-system" Pod="coredns-668d6bf9bc-jwndq" WorkloadEndpoint="10.0.0.45-k8s-coredns--668d6bf9bc--jwndq-eth0" Sep 4 15:46:51.498512 containerd[1507]: time="2025-09-04T15:46:51.498442500Z" level=info msg="connecting to shim e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b" address="unix:///run/containerd/s/f39c4e197c99ea949bd1a2e805d932981fa30811e7ad0411ac7e9ef2e1c9866b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:51.525517 systemd[1]: Started cri-containerd-e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b.scope - libcontainer container e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b. 
Sep 4 15:46:51.535227 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:51.553786 containerd[1507]: time="2025-09-04T15:46:51.553739628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jwndq,Uid:38c417fb-b280-4f0a-ba6b-ae890b1ccfc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b\"" Sep 4 15:46:51.554432 kubelet[1848]: E0904 15:46:51.554407 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:46:51.886131 kubelet[1848]: E0904 15:46:51.886087 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:52.016690 containerd[1507]: time="2025-09-04T15:46:52.016418224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68dbb789b-bbg75,Uid:5580c510-f994-4e0d-b8b6-f1d1fdc65ef4,Namespace:calico-system,Attempt:0,}" Sep 4 15:46:52.119978 systemd-networkd[1427]: calif52189ec681: Link UP Sep 4 15:46:52.120729 systemd-networkd[1427]: calif52189ec681: Gained carrier Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.054 [INFO][3314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0 calico-kube-controllers-68dbb789b- calico-system 5580c510-f994-4e0d-b8b6-f1d1fdc65ef4 946 0 2025-09-04 15:46:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68dbb789b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.0.0.45 calico-kube-controllers-68dbb789b-bbg75 eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif52189ec681 [] [] }} ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Namespace="calico-system" Pod="calico-kube-controllers-68dbb789b-bbg75" WorkloadEndpoint="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.054 [INFO][3314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Namespace="calico-system" Pod="calico-kube-controllers-68dbb789b-bbg75" WorkloadEndpoint="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.078 [INFO][3328] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" HandleID="k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Workload="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.078 [INFO][3328] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" HandleID="k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Workload="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000128f80), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.45", "pod":"calico-kube-controllers-68dbb789b-bbg75", "timestamp":"2025-09-04 15:46:52.078748112 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:52.132781 containerd[1507]: 
2025-09-04 15:46:52.078 [INFO][3328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.079 [INFO][3328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.079 [INFO][3328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.090 [INFO][3328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" host="10.0.0.45" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.095 [INFO][3328] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.099 [INFO][3328] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.101 [INFO][3328] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.104 [INFO][3328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.104 [INFO][3328] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" host="10.0.0.45" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.106 [INFO][3328] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354 Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.109 [INFO][3328] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" host="10.0.0.45" 
Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.115 [INFO][3328] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.134/26] block=192.168.34.128/26 handle="k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" host="10.0.0.45" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.115 [INFO][3328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.134/26] handle="k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" host="10.0.0.45" Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.115 [INFO][3328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:46:52.132781 containerd[1507]: 2025-09-04 15:46:52.115 [INFO][3328] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.134/26] IPv6=[] ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" HandleID="k8s-pod-network.66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Workload="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" Sep 4 15:46:52.133685 containerd[1507]: 2025-09-04 15:46:52.117 [INFO][3314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Namespace="calico-system" Pod="calico-kube-controllers-68dbb789b-bbg75" WorkloadEndpoint="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0", GenerateName:"calico-kube-controllers-68dbb789b-", Namespace:"calico-system", SelfLink:"", UID:"5580c510-f994-4e0d-b8b6-f1d1fdc65ef4", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68dbb789b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"calico-kube-controllers-68dbb789b-bbg75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif52189ec681", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:52.133685 containerd[1507]: 2025-09-04 15:46:52.117 [INFO][3314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.134/32] ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Namespace="calico-system" Pod="calico-kube-controllers-68dbb789b-bbg75" WorkloadEndpoint="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" Sep 4 15:46:52.133685 containerd[1507]: 2025-09-04 15:46:52.117 [INFO][3314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif52189ec681 ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Namespace="calico-system" Pod="calico-kube-controllers-68dbb789b-bbg75" WorkloadEndpoint="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" Sep 4 15:46:52.133685 containerd[1507]: 2025-09-04 15:46:52.119 [INFO][3314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" 
Namespace="calico-system" Pod="calico-kube-controllers-68dbb789b-bbg75" WorkloadEndpoint="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" Sep 4 15:46:52.133685 containerd[1507]: 2025-09-04 15:46:52.121 [INFO][3314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Namespace="calico-system" Pod="calico-kube-controllers-68dbb789b-bbg75" WorkloadEndpoint="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0", GenerateName:"calico-kube-controllers-68dbb789b-", Namespace:"calico-system", SelfLink:"", UID:"5580c510-f994-4e0d-b8b6-f1d1fdc65ef4", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68dbb789b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354", Pod:"calico-kube-controllers-68dbb789b-bbg75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"calif52189ec681", MAC:"ca:57:de:02:ea:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:52.133685 containerd[1507]: 2025-09-04 15:46:52.130 [INFO][3314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" Namespace="calico-system" Pod="calico-kube-controllers-68dbb789b-bbg75" WorkloadEndpoint="10.0.0.45-k8s-calico--kube--controllers--68dbb789b--bbg75-eth0" Sep 4 15:46:52.149848 containerd[1507]: time="2025-09-04T15:46:52.149739812Z" level=info msg="connecting to shim 66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354" address="unix:///run/containerd/s/9cee8f63ce10a0e5fba5134e12c18c0ced6bed568905b9401d8cd92c30ba20b4" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:52.179546 systemd[1]: Started cri-containerd-66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354.scope - libcontainer container 66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354. 
Sep 4 15:46:52.189286 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:52.241931 containerd[1507]: time="2025-09-04T15:46:52.241894725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68dbb789b-bbg75,Uid:5580c510-f994-4e0d-b8b6-f1d1fdc65ef4,Namespace:calico-system,Attempt:0,} returns sandbox id \"66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354\"" Sep 4 15:46:52.292513 systemd-networkd[1427]: calib8d34343ca0: Gained IPv6LL Sep 4 15:46:52.887177 kubelet[1848]: E0904 15:46:52.887132 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:52.932523 systemd-networkd[1427]: cali624e8c5d90f: Gained IPv6LL Sep 4 15:46:52.996460 systemd-networkd[1427]: cali826e68e31ea: Gained IPv6LL Sep 4 15:46:53.016687 containerd[1507]: time="2025-09-04T15:46:53.016362496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tbln,Uid:98a9b3c2-bd87-49de-849d-b1a3195b1b9f,Namespace:calico-system,Attempt:0,}" Sep 4 15:46:53.061443 systemd-networkd[1427]: calif0649e5f981: Gained IPv6LL Sep 4 15:46:53.131094 systemd-networkd[1427]: cali97c19ab2c73: Link UP Sep 4 15:46:53.131278 systemd-networkd[1427]: cali97c19ab2c73: Gained carrier Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.062 [INFO][3393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-csi--node--driver--4tbln-eth0 csi-node-driver- calico-system 98a9b3c2-bd87-49de-849d-b1a3195b1b9f 851 0 2025-09-04 15:46:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.45 
csi-node-driver-4tbln eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali97c19ab2c73 [] [] }} ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Namespace="calico-system" Pod="csi-node-driver-4tbln" WorkloadEndpoint="10.0.0.45-k8s-csi--node--driver--4tbln-" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.062 [INFO][3393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Namespace="calico-system" Pod="csi-node-driver-4tbln" WorkloadEndpoint="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.084 [INFO][3407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" HandleID="k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Workload="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.084 [INFO][3407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" HandleID="k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Workload="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035c130), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.45", "pod":"csi-node-driver-4tbln", "timestamp":"2025-09-04 15:46:53.084350373 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.084 [INFO][3407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.084 [INFO][3407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.084 [INFO][3407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.098 [INFO][3407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.103 [INFO][3407] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.107 [INFO][3407] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.109 [INFO][3407] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.111 [INFO][3407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.111 [INFO][3407] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.113 [INFO][3407] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00 Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.116 [INFO][3407] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.122 [INFO][3407] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.34.135/26] block=192.168.34.128/26 handle="k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.122 [INFO][3407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.135/26] handle="k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" host="10.0.0.45" Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.122 [INFO][3407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:46:53.145010 containerd[1507]: 2025-09-04 15:46:53.122 [INFO][3407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.135/26] IPv6=[] ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" HandleID="k8s-pod-network.bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Workload="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" Sep 4 15:46:53.150941 containerd[1507]: 2025-09-04 15:46:53.124 [INFO][3393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Namespace="calico-system" Pod="csi-node-driver-4tbln" WorkloadEndpoint="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-csi--node--driver--4tbln-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98a9b3c2-bd87-49de-849d-b1a3195b1b9f", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"csi-node-driver-4tbln", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali97c19ab2c73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:53.150941 containerd[1507]: 2025-09-04 15:46:53.125 [INFO][3393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.135/32] ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Namespace="calico-system" Pod="csi-node-driver-4tbln" WorkloadEndpoint="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" Sep 4 15:46:53.150941 containerd[1507]: 2025-09-04 15:46:53.125 [INFO][3393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97c19ab2c73 ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Namespace="calico-system" Pod="csi-node-driver-4tbln" WorkloadEndpoint="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" Sep 4 15:46:53.150941 containerd[1507]: 2025-09-04 15:46:53.130 [INFO][3393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Namespace="calico-system" Pod="csi-node-driver-4tbln" WorkloadEndpoint="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" Sep 4 15:46:53.150941 containerd[1507]: 2025-09-04 15:46:53.134 [INFO][3393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" Namespace="calico-system" Pod="csi-node-driver-4tbln" WorkloadEndpoint="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-csi--node--driver--4tbln-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98a9b3c2-bd87-49de-849d-b1a3195b1b9f", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00", Pod:"csi-node-driver-4tbln", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali97c19ab2c73", MAC:"26:ba:57:df:f3:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:53.150941 containerd[1507]: 2025-09-04 15:46:53.142 [INFO][3393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" 
Namespace="calico-system" Pod="csi-node-driver-4tbln" WorkloadEndpoint="10.0.0.45-k8s-csi--node--driver--4tbln-eth0" Sep 4 15:46:53.166694 containerd[1507]: time="2025-09-04T15:46:53.166210763Z" level=info msg="connecting to shim bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00" address="unix:///run/containerd/s/a74e50d14038941f5d37de56c8ef823fa94875636f24667c8cb89a90ae2313bc" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:53.195240 systemd[1]: Started cri-containerd-bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00.scope - libcontainer container bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00. Sep 4 15:46:53.206784 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:53.217092 containerd[1507]: time="2025-09-04T15:46:53.217042038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tbln,Uid:98a9b3c2-bd87-49de-849d-b1a3195b1b9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00\"" Sep 4 15:46:53.444455 systemd-networkd[1427]: calif52189ec681: Gained IPv6LL Sep 4 15:46:53.887313 kubelet[1848]: E0904 15:46:53.887266 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:54.015807 containerd[1507]: time="2025-09-04T15:46:54.015725469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857d4f49b9-497bn,Uid:39da1048-d7f6-4184-8f7e-1f20fd878471,Namespace:calico-apiserver,Attempt:0,}" Sep 4 15:46:54.109486 systemd-networkd[1427]: caliba910781537: Link UP Sep 4 15:46:54.109782 systemd-networkd[1427]: caliba910781537: Gained carrier Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.049 [INFO][3471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0 calico-apiserver-857d4f49b9- calico-apiserver 39da1048-d7f6-4184-8f7e-1f20fd878471 948 0 2025-09-04 15:45:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:857d4f49b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.45 calico-apiserver-857d4f49b9-497bn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba910781537 [] [] }} ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-497bn" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.049 [INFO][3471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-497bn" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.070 [INFO][3485] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" HandleID="k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Workload="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.070 [INFO][3485] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" HandleID="k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Workload="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df40), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.45", "pod":"calico-apiserver-857d4f49b9-497bn", "timestamp":"2025-09-04 15:46:54.07073346 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.070 [INFO][3485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.070 [INFO][3485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.070 [INFO][3485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.081 [INFO][3485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" host="10.0.0.45" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.086 [INFO][3485] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.090 [INFO][3485] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.092 [INFO][3485] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.095 [INFO][3485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.095 [INFO][3485] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" host="10.0.0.45" Sep 4 15:46:54.119854 
containerd[1507]: 2025-09-04 15:46:54.096 [INFO][3485] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705 Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.099 [INFO][3485] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" host="10.0.0.45" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.105 [INFO][3485] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.136/26] block=192.168.34.128/26 handle="k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" host="10.0.0.45" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.105 [INFO][3485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.136/26] handle="k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" host="10.0.0.45" Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.105 [INFO][3485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 15:46:54.119854 containerd[1507]: 2025-09-04 15:46:54.105 [INFO][3485] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.136/26] IPv6=[] ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" HandleID="k8s-pod-network.3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Workload="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" Sep 4 15:46:54.120652 containerd[1507]: 2025-09-04 15:46:54.107 [INFO][3471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-497bn" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0", GenerateName:"calico-apiserver-857d4f49b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"39da1048-d7f6-4184-8f7e-1f20fd878471", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857d4f49b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"calico-apiserver-857d4f49b9-497bn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba910781537", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:54.120652 containerd[1507]: 2025-09-04 15:46:54.107 [INFO][3471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.136/32] ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-497bn" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" Sep 4 15:46:54.120652 containerd[1507]: 2025-09-04 15:46:54.107 [INFO][3471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba910781537 ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-497bn" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" Sep 4 15:46:54.120652 containerd[1507]: 2025-09-04 15:46:54.110 [INFO][3471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-497bn" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" Sep 4 15:46:54.120652 containerd[1507]: 2025-09-04 15:46:54.110 [INFO][3471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-497bn" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0", 
GenerateName:"calico-apiserver-857d4f49b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"39da1048-d7f6-4184-8f7e-1f20fd878471", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857d4f49b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705", Pod:"calico-apiserver-857d4f49b9-497bn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba910781537", MAC:"22:b2:29:7c:a9:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:54.120652 containerd[1507]: 2025-09-04 15:46:54.117 [INFO][3471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" Namespace="calico-apiserver" Pod="calico-apiserver-857d4f49b9-497bn" WorkloadEndpoint="10.0.0.45-k8s-calico--apiserver--857d4f49b9--497bn-eth0" Sep 4 15:46:54.144252 containerd[1507]: time="2025-09-04T15:46:54.144110682Z" level=info msg="connecting to shim 3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705" 
address="unix:///run/containerd/s/1a6295a47a1720cfad8f463cdea084ee8b14b6658ad5e6ddebb2ede2e5f66d6c" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:54.168487 systemd[1]: Started cri-containerd-3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705.scope - libcontainer container 3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705. Sep 4 15:46:54.180581 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:54.199522 containerd[1507]: time="2025-09-04T15:46:54.199483455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857d4f49b9-497bn,Uid:39da1048-d7f6-4184-8f7e-1f20fd878471,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705\"" Sep 4 15:46:54.340713 systemd-networkd[1427]: cali97c19ab2c73: Gained IPv6LL Sep 4 15:46:54.887553 kubelet[1848]: E0904 15:46:54.887504 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:55.876537 systemd-networkd[1427]: caliba910781537: Gained IPv6LL Sep 4 15:46:55.888687 kubelet[1848]: E0904 15:46:55.888641 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:56.016084 containerd[1507]: time="2025-09-04T15:46:56.016044757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nz2hk,Uid:39c89dc6-66bc-4c67-bbfc-64965a9657ac,Namespace:default,Attempt:0,}" Sep 4 15:46:56.123293 systemd-networkd[1427]: cali3cfc26492d4: Link UP Sep 4 15:46:56.124183 systemd-networkd[1427]: cali3cfc26492d4: Gained carrier Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.060 [INFO][3554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0 
nginx-deployment-7fcdb87857- default 39c89dc6-66bc-4c67-bbfc-64965a9657ac 996 0 2025-09-04 15:46:43 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.45 nginx-deployment-7fcdb87857-nz2hk eth0 default [] [] [kns.default ksa.default.default] cali3cfc26492d4 [] [] }} ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Namespace="default" Pod="nginx-deployment-7fcdb87857-nz2hk" WorkloadEndpoint="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.060 [INFO][3554] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Namespace="default" Pod="nginx-deployment-7fcdb87857-nz2hk" WorkloadEndpoint="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.082 [INFO][3570] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" HandleID="k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Workload="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.082 [INFO][3570] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" HandleID="k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Workload="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc300), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.45", "pod":"nginx-deployment-7fcdb87857-nz2hk", "timestamp":"2025-09-04 15:46:56.0821639 +0000 UTC"}, Hostname:"10.0.0.45", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.082 [INFO][3570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.082 [INFO][3570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.082 [INFO][3570] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.093 [INFO][3570] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.098 [INFO][3570] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.102 [INFO][3570] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.104 [INFO][3570] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.106 [INFO][3570] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.106 [INFO][3570] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.108 [INFO][3570] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e Sep 4 15:46:56.133841 
containerd[1507]: 2025-09-04 15:46:56.112 [INFO][3570] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.118 [INFO][3570] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.137/26] block=192.168.34.128/26 handle="k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.118 [INFO][3570] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.137/26] handle="k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" host="10.0.0.45" Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.118 [INFO][3570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:46:56.133841 containerd[1507]: 2025-09-04 15:46:56.118 [INFO][3570] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.137/26] IPv6=[] ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" HandleID="k8s-pod-network.1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Workload="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" Sep 4 15:46:56.134392 containerd[1507]: 2025-09-04 15:46:56.120 [INFO][3554] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Namespace="default" Pod="nginx-deployment-7fcdb87857-nz2hk" WorkloadEndpoint="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"39c89dc6-66bc-4c67-bbfc-64965a9657ac", ResourceVersion:"996", 
Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-nz2hk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3cfc26492d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:56.134392 containerd[1507]: 2025-09-04 15:46:56.120 [INFO][3554] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.137/32] ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Namespace="default" Pod="nginx-deployment-7fcdb87857-nz2hk" WorkloadEndpoint="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" Sep 4 15:46:56.134392 containerd[1507]: 2025-09-04 15:46:56.120 [INFO][3554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cfc26492d4 ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Namespace="default" Pod="nginx-deployment-7fcdb87857-nz2hk" WorkloadEndpoint="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" Sep 4 15:46:56.134392 containerd[1507]: 2025-09-04 15:46:56.124 [INFO][3554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Namespace="default" 
Pod="nginx-deployment-7fcdb87857-nz2hk" WorkloadEndpoint="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" Sep 4 15:46:56.134392 containerd[1507]: 2025-09-04 15:46:56.124 [INFO][3554] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Namespace="default" Pod="nginx-deployment-7fcdb87857-nz2hk" WorkloadEndpoint="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"39c89dc6-66bc-4c67-bbfc-64965a9657ac", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e", Pod:"nginx-deployment-7fcdb87857-nz2hk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3cfc26492d4", MAC:"6a:f2:f4:0e:de:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:46:56.134392 containerd[1507]: 2025-09-04 15:46:56.131 
[INFO][3554] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" Namespace="default" Pod="nginx-deployment-7fcdb87857-nz2hk" WorkloadEndpoint="10.0.0.45-k8s-nginx--deployment--7fcdb87857--nz2hk-eth0" Sep 4 15:46:56.157853 containerd[1507]: time="2025-09-04T15:46:56.157633271Z" level=info msg="connecting to shim 1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e" address="unix:///run/containerd/s/1a104dc12d8401c9af03418f2add981fb44e4c30f21832e27316cbd704b1aeec" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:46:56.182475 systemd[1]: Started cri-containerd-1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e.scope - libcontainer container 1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e. Sep 4 15:46:56.192738 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:46:56.211263 containerd[1507]: time="2025-09-04T15:46:56.211215866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nz2hk,Uid:39c89dc6-66bc-4c67-bbfc-64965a9657ac,Namespace:default,Attempt:0,} returns sandbox id \"1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e\"" Sep 4 15:46:56.889712 kubelet[1848]: E0904 15:46:56.889667 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:46:57.558829 update_engine[1495]: I20250904 15:46:57.558696 1495 update_attempter.cc:509] Updating boot flags... 
Sep 4 15:46:57.604707 systemd-networkd[1427]: cali3cfc26492d4: Gained IPv6LL
Sep 4 15:46:57.890327 kubelet[1848]: E0904 15:46:57.890249 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:58.890841 kubelet[1848]: E0904 15:46:58.890796 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:46:59.891142 kubelet[1848]: E0904 15:46:59.891072 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:00.336033 containerd[1507]: time="2025-09-04T15:47:00.335483597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:00.337466 containerd[1507]: time="2025-09-04T15:47:00.337439789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606"
Sep 4 15:47:00.338650 containerd[1507]: time="2025-09-04T15:47:00.338620863Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:00.341149 containerd[1507]: time="2025-09-04T15:47:00.341116916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:00.341786 containerd[1507]: time="2025-09-04T15:47:00.341763887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 11.618556954s"
Sep 4 15:47:00.341828 containerd[1507]: time="2025-09-04T15:47:00.341791964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\""
Sep 4 15:47:00.343954 containerd[1507]: time="2025-09-04T15:47:00.343912298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 4 15:47:00.344935 containerd[1507]: time="2025-09-04T15:47:00.344899992Z" level=info msg="CreateContainer within sandbox \"07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 4 15:47:00.352031 containerd[1507]: time="2025-09-04T15:47:00.351971157Z" level=info msg="Container 8694335b7ad6265ccdef713bb37d1eba921789d306a588ad2997e467a012bd6f: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:47:00.359325 containerd[1507]: time="2025-09-04T15:47:00.359263379Z" level=info msg="CreateContainer within sandbox \"07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8694335b7ad6265ccdef713bb37d1eba921789d306a588ad2997e467a012bd6f\""
Sep 4 15:47:00.359963 containerd[1507]: time="2025-09-04T15:47:00.359928868Z" level=info msg="StartContainer for \"8694335b7ad6265ccdef713bb37d1eba921789d306a588ad2997e467a012bd6f\""
Sep 4 15:47:00.360929 containerd[1507]: time="2025-09-04T15:47:00.360892805Z" level=info msg="connecting to shim 8694335b7ad6265ccdef713bb37d1eba921789d306a588ad2997e467a012bd6f" address="unix:///run/containerd/s/0c8dd6b31d32467deb9993154dff753ea0a99b54f00068eae21c9faf4d8dd752" protocol=ttrpc version=3
Sep 4 15:47:00.379483 systemd[1]: Started cri-containerd-8694335b7ad6265ccdef713bb37d1eba921789d306a588ad2997e467a012bd6f.scope - libcontainer container 8694335b7ad6265ccdef713bb37d1eba921789d306a588ad2997e467a012bd6f.
Sep 4 15:47:00.413513 containerd[1507]: time="2025-09-04T15:47:00.413465273Z" level=info msg="StartContainer for \"8694335b7ad6265ccdef713bb37d1eba921789d306a588ad2997e467a012bd6f\" returns successfully"
Sep 4 15:47:00.891567 kubelet[1848]: E0904 15:47:00.891517 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:01.869322 kubelet[1848]: E0904 15:47:01.869256 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:01.891836 kubelet[1848]: E0904 15:47:01.891783 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:02.892730 kubelet[1848]: E0904 15:47:02.892683 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:03.893418 kubelet[1848]: E0904 15:47:03.893344 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:04.894176 kubelet[1848]: E0904 15:47:04.894097 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:05.894748 kubelet[1848]: E0904 15:47:05.894704 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:06.895857 kubelet[1848]: E0904 15:47:06.895799 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:07.896959 kubelet[1848]: E0904 15:47:07.896913 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:08.897119 kubelet[1848]: E0904 15:47:08.897068 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:09.898254 kubelet[1848]: E0904 15:47:09.898196 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:10.898625 kubelet[1848]: E0904 15:47:10.898581 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:11.899666 kubelet[1848]: E0904 15:47:11.899606 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:12.608678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659461721.mount: Deactivated successfully.
Sep 4 15:47:12.845699 containerd[1507]: time="2025-09-04T15:47:12.845646874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:12.846046 containerd[1507]: time="2025-09-04T15:47:12.846025375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 4 15:47:12.849494 containerd[1507]: time="2025-09-04T15:47:12.849461006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 12.505510191s"
Sep 4 15:47:12.849494 containerd[1507]: time="2025-09-04T15:47:12.849497844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 4 15:47:12.850687 containerd[1507]: time="2025-09-04T15:47:12.850632428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 15:47:12.852210 containerd[1507]: time="2025-09-04T15:47:12.852166553Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:12.852689 containerd[1507]: time="2025-09-04T15:47:12.852498656Z" level=info msg="CreateContainer within sandbox \"af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 4 15:47:12.852876 containerd[1507]: time="2025-09-04T15:47:12.852845879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:12.859206 containerd[1507]: time="2025-09-04T15:47:12.858462003Z" level=info msg="Container 1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:47:12.867726 containerd[1507]: time="2025-09-04T15:47:12.867688869Z" level=info msg="CreateContainer within sandbox \"af4e5da247fc7b1e247a0b84f34e86750f418fb94788386f20a1a0b15be4b98f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\""
Sep 4 15:47:12.868299 containerd[1507]: time="2025-09-04T15:47:12.868271560Z" level=info msg="StartContainer for \"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\""
Sep 4 15:47:12.869262 containerd[1507]: time="2025-09-04T15:47:12.869235073Z" level=info msg="connecting to shim 1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6" address="unix:///run/containerd/s/44af61bb0ad661f314f4750e755457ef5057145040ca65410b1f77cf96a6468e" protocol=ttrpc version=3
Sep 4 15:47:12.900388 kubelet[1848]: E0904 15:47:12.900352 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:12.914443 systemd[1]: Started cri-containerd-1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6.scope - libcontainer container 1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6.
Sep 4 15:47:12.952939 containerd[1507]: time="2025-09-04T15:47:12.952902156Z" level=info msg="StartContainer for \"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\" returns successfully"
Sep 4 15:47:13.213923 containerd[1507]: time="2025-09-04T15:47:13.213870289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\" id:\"feaefd25fa0d982ca86e1526e96824e6201f455617b79682bd24710058ee1ee6\" pid:3758 exit_status:1 exited_at:{seconds:1757000833 nanos:213397991}"
Sep 4 15:47:13.502631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424963241.mount: Deactivated successfully.
Sep 4 15:47:13.900909 kubelet[1848]: E0904 15:47:13.900859 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:14.204474 containerd[1507]: time="2025-09-04T15:47:14.204352506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\" id:\"572791698e2b63c7508efcc03bed695680d1c1a5c7332939c5cc7985567edc8b\" pid:3838 exit_status:1 exited_at:{seconds:1757000834 nanos:203904245}"
Sep 4 15:47:14.240953 containerd[1507]: time="2025-09-04T15:47:14.240899965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:14.241462 containerd[1507]: time="2025-09-04T15:47:14.241431862Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 4 15:47:14.242322 containerd[1507]: time="2025-09-04T15:47:14.242273986Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:14.244845 containerd[1507]: time="2025-09-04T15:47:14.244813516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:14.246322 containerd[1507]: time="2025-09-04T15:47:14.245891710Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.395018133s"
Sep 4 15:47:14.246322 containerd[1507]: time="2025-09-04T15:47:14.245925868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 4 15:47:14.247602 containerd[1507]: time="2025-09-04T15:47:14.247571517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 4 15:47:14.248240 containerd[1507]: time="2025-09-04T15:47:14.248213409Z" level=info msg="CreateContainer within sandbox \"ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 15:47:14.255904 containerd[1507]: time="2025-09-04T15:47:14.255856639Z" level=info msg="Container 108d995b89093731590fc7e664aab742763670166f414e511227756670a050ee: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:47:14.260505 containerd[1507]: time="2025-09-04T15:47:14.260459240Z" level=info msg="CreateContainer within sandbox \"ee45f56cf5c2301b4a54995f4f2508f95e069f53020c2b4a54913bca64be07af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"108d995b89093731590fc7e664aab742763670166f414e511227756670a050ee\""
Sep 4 15:47:14.261148 containerd[1507]: time="2025-09-04T15:47:14.261121811Z" level=info msg="StartContainer for \"108d995b89093731590fc7e664aab742763670166f414e511227756670a050ee\""
Sep 4 15:47:14.261895 containerd[1507]: time="2025-09-04T15:47:14.261872939Z" level=info msg="connecting to shim 108d995b89093731590fc7e664aab742763670166f414e511227756670a050ee" address="unix:///run/containerd/s/2f096dd02d6dc593cc6a1390890b7cabba828af81c9be1fea943e8b3c30cd546" protocol=ttrpc version=3
Sep 4 15:47:14.284439 systemd[1]: Started cri-containerd-108d995b89093731590fc7e664aab742763670166f414e511227756670a050ee.scope - libcontainer container 108d995b89093731590fc7e664aab742763670166f414e511227756670a050ee.
Sep 4 15:47:14.308401 containerd[1507]: time="2025-09-04T15:47:14.308354248Z" level=info msg="StartContainer for \"108d995b89093731590fc7e664aab742763670166f414e511227756670a050ee\" returns successfully"
Sep 4 15:47:14.902073 kubelet[1848]: E0904 15:47:14.902021 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:15.136413 kubelet[1848]: E0904 15:47:15.136384 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:47:15.148344 kubelet[1848]: I0904 15:47:15.148224 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zf8sn" podStartSLOduration=63.260826083 podStartE2EDuration="1m26.148209045s" podCreationTimestamp="2025-09-04 15:45:49 +0000 UTC" firstStartedPulling="2025-09-04 15:46:51.359435308 +0000 UTC m=+30.415725633" lastFinishedPulling="2025-09-04 15:47:14.24681827 +0000 UTC m=+53.303108595" observedRunningTime="2025-09-04 15:47:15.147923457 +0000 UTC m=+54.204213782" watchObservedRunningTime="2025-09-04 15:47:15.148209045 +0000 UTC m=+54.204499370"
Sep 4 15:47:15.148486 kubelet[1848]: I0904 15:47:15.148384 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-92gzs" podStartSLOduration=52.548416106 podStartE2EDuration="1m14.148378318s" podCreationTimestamp="2025-09-04 15:46:01 +0000 UTC" firstStartedPulling="2025-09-04 15:46:51.250437628 +0000 UTC m=+30.306727913" lastFinishedPulling="2025-09-04 15:47:12.8503998 +0000 UTC m=+51.906690125" observedRunningTime="2025-09-04 15:47:13.145391168 +0000 UTC m=+52.201681453" watchObservedRunningTime="2025-09-04 15:47:15.148378318 +0000 UTC m=+54.204668643"
Sep 4 15:47:15.204667 containerd[1507]: time="2025-09-04T15:47:15.204554201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\" id:\"670abd8ab451df163c2c63cd0c017fed3f341dd6d57717cbd6c4602de56b3e61\" pid:3897 exit_status:1 exited_at:{seconds:1757000835 nanos:204168496}"
Sep 4 15:47:15.903178 kubelet[1848]: E0904 15:47:15.903113 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:16.137756 kubelet[1848]: E0904 15:47:16.137711 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:47:16.903580 kubelet[1848]: E0904 15:47:16.903539 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:17.138972 kubelet[1848]: E0904 15:47:17.138934 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:47:17.903840 kubelet[1848]: E0904 15:47:17.903795 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:18.140372 kubelet[1848]: E0904 15:47:18.140340 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:47:18.904351 kubelet[1848]: E0904 15:47:18.904291 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:19.560045 containerd[1507]: time="2025-09-04T15:47:19.560004435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:19.560690 containerd[1507]: time="2025-09-04T15:47:19.560650102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 4 15:47:19.561686 containerd[1507]: time="2025-09-04T15:47:19.561646242Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:19.563374 containerd[1507]: time="2025-09-04T15:47:19.563294809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:19.563808 containerd[1507]: time="2025-09-04T15:47:19.563775759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 5.316174043s"
Sep 4 15:47:19.563858 containerd[1507]: time="2025-09-04T15:47:19.563808039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 4 15:47:19.565449 containerd[1507]: time="2025-09-04T15:47:19.565389367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 15:47:19.566179 containerd[1507]: time="2025-09-04T15:47:19.566156191Z" level=info msg="CreateContainer within sandbox \"a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 15:47:19.573332 containerd[1507]: time="2025-09-04T15:47:19.572169510Z" level=info msg="Container 3317717fc2db18381adceeeafae30e0683cdc1dd805a0357bf1f52f256134913: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:47:19.601998 containerd[1507]: time="2025-09-04T15:47:19.601920469Z" level=info msg="CreateContainer within sandbox \"a962146c456d3f05d681b8dae86410efe4ffa3dc5b0f58ed66afb8a4b02507af\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3317717fc2db18381adceeeafae30e0683cdc1dd805a0357bf1f52f256134913\""
Sep 4 15:47:19.602547 containerd[1507]: time="2025-09-04T15:47:19.602502457Z" level=info msg="StartContainer for \"3317717fc2db18381adceeeafae30e0683cdc1dd805a0357bf1f52f256134913\""
Sep 4 15:47:19.603656 containerd[1507]: time="2025-09-04T15:47:19.603599235Z" level=info msg="connecting to shim 3317717fc2db18381adceeeafae30e0683cdc1dd805a0357bf1f52f256134913" address="unix:///run/containerd/s/5cb1002f8205b8f00d057e568779836608ff4a9579f876af1e8fcc2ef86b46be" protocol=ttrpc version=3
Sep 4 15:47:19.642474 systemd[1]: Started cri-containerd-3317717fc2db18381adceeeafae30e0683cdc1dd805a0357bf1f52f256134913.scope - libcontainer container 3317717fc2db18381adceeeafae30e0683cdc1dd805a0357bf1f52f256134913.
Sep 4 15:47:19.668427 containerd[1507]: time="2025-09-04T15:47:19.668385887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:19.668999 containerd[1507]: time="2025-09-04T15:47:19.668972075Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0"
Sep 4 15:47:19.673205 containerd[1507]: time="2025-09-04T15:47:19.673170070Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 107.750984ms"
Sep 4 15:47:19.673205 containerd[1507]: time="2025-09-04T15:47:19.673202469Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 4 15:47:19.674739 containerd[1507]: time="2025-09-04T15:47:19.674438164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 4 15:47:19.675875 containerd[1507]: time="2025-09-04T15:47:19.675845656Z" level=info msg="CreateContainer within sandbox \"e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 15:47:19.677025 containerd[1507]: time="2025-09-04T15:47:19.676977033Z" level=info msg="StartContainer for \"3317717fc2db18381adceeeafae30e0683cdc1dd805a0357bf1f52f256134913\" returns successfully"
Sep 4 15:47:19.686725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580248862.mount: Deactivated successfully.
Sep 4 15:47:19.688049 containerd[1507]: time="2025-09-04T15:47:19.688018210Z" level=info msg="Container 7b9272eeb7cf1c0319d3b97673038e540d50f2e9e249489ae315553913fe0511: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:47:19.696408 containerd[1507]: time="2025-09-04T15:47:19.696206165Z" level=info msg="CreateContainer within sandbox \"e54b01743b553022088327ce8ccf7a212d70292077cc08061a50bf031948a69b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b9272eeb7cf1c0319d3b97673038e540d50f2e9e249489ae315553913fe0511\""
Sep 4 15:47:19.697861 containerd[1507]: time="2025-09-04T15:47:19.696796953Z" level=info msg="StartContainer for \"7b9272eeb7cf1c0319d3b97673038e540d50f2e9e249489ae315553913fe0511\""
Sep 4 15:47:19.697861 containerd[1507]: time="2025-09-04T15:47:19.697771413Z" level=info msg="connecting to shim 7b9272eeb7cf1c0319d3b97673038e540d50f2e9e249489ae315553913fe0511" address="unix:///run/containerd/s/f39c4e197c99ea949bd1a2e805d932981fa30811e7ad0411ac7e9ef2e1c9866b" protocol=ttrpc version=3
Sep 4 15:47:19.717434 systemd[1]: Started cri-containerd-7b9272eeb7cf1c0319d3b97673038e540d50f2e9e249489ae315553913fe0511.scope - libcontainer container 7b9272eeb7cf1c0319d3b97673038e540d50f2e9e249489ae315553913fe0511.
Sep 4 15:47:19.746348 containerd[1507]: time="2025-09-04T15:47:19.745688645Z" level=info msg="StartContainer for \"7b9272eeb7cf1c0319d3b97673038e540d50f2e9e249489ae315553913fe0511\" returns successfully"
Sep 4 15:47:19.905235 kubelet[1848]: E0904 15:47:19.905199 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:20.147635 kubelet[1848]: E0904 15:47:20.147549 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:47:20.148476 containerd[1507]: time="2025-09-04T15:47:20.148400234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c\" id:\"e52d862cb4e6ef04406b6dbf2fdf7097d3947255ff42ef9ed887aae9a6844bef\" pid:4004 exited_at:{seconds:1757000840 nanos:147803806}"
Sep 4 15:47:20.158985 kubelet[1848]: I0904 15:47:20.158834 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jwndq" podStartSLOduration=63.039905974 podStartE2EDuration="1m31.15881807s" podCreationTimestamp="2025-09-04 15:45:49 +0000 UTC" firstStartedPulling="2025-09-04 15:46:51.555164116 +0000 UTC m=+30.611454441" lastFinishedPulling="2025-09-04 15:47:19.674076212 +0000 UTC m=+58.730366537" observedRunningTime="2025-09-04 15:47:20.158705712 +0000 UTC m=+59.214996037" watchObservedRunningTime="2025-09-04 15:47:20.15881807 +0000 UTC m=+59.215108395"
Sep 4 15:47:20.185312 kubelet[1848]: I0904 15:47:20.185179 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-857d4f49b9-p879q" podStartSLOduration=55.074600568 podStartE2EDuration="1m23.185161272s" podCreationTimestamp="2025-09-04 15:45:57 +0000 UTC" firstStartedPulling="2025-09-04 15:46:51.454125677 +0000 UTC m=+30.510416002" lastFinishedPulling="2025-09-04 15:47:19.564686381 +0000 UTC m=+58.620976706" observedRunningTime="2025-09-04 15:47:20.182996555 +0000 UTC m=+59.239286880" watchObservedRunningTime="2025-09-04 15:47:20.185161272 +0000 UTC m=+59.241451557"
Sep 4 15:47:20.905610 kubelet[1848]: E0904 15:47:20.905558 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:21.152731 kubelet[1848]: E0904 15:47:21.152692 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:47:21.868606 kubelet[1848]: E0904 15:47:21.868539 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:21.906129 kubelet[1848]: E0904 15:47:21.906089 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:22.154911 kubelet[1848]: E0904 15:47:22.154850 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 15:47:22.907088 kubelet[1848]: E0904 15:47:22.907038 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:23.907413 kubelet[1848]: E0904 15:47:23.907360 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:24.908173 kubelet[1848]: E0904 15:47:24.908121 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:25.908819 kubelet[1848]: E0904 15:47:25.908784 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:26.910282 kubelet[1848]: E0904 15:47:26.910241 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:27.911685 kubelet[1848]: E0904 15:47:27.911607 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:28.632455 containerd[1507]: time="2025-09-04T15:47:28.632388578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:28.633391 containerd[1507]: time="2025-09-04T15:47:28.633224365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 4 15:47:28.634112 containerd[1507]: time="2025-09-04T15:47:28.634078592Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:28.636232 containerd[1507]: time="2025-09-04T15:47:28.636199798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:28.636825 containerd[1507]: time="2025-09-04T15:47:28.636700990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 8.962208387s"
Sep 4 15:47:28.636825 containerd[1507]: time="2025-09-04T15:47:28.636738830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 4 15:47:28.638670 containerd[1507]: time="2025-09-04T15:47:28.638649599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 4 15:47:28.659571 containerd[1507]: time="2025-09-04T15:47:28.659518590Z" level=info msg="CreateContainer within sandbox \"66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 4 15:47:28.709532 containerd[1507]: time="2025-09-04T15:47:28.709481921Z" level=info msg="Container d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:47:28.718133 containerd[1507]: time="2025-09-04T15:47:28.718077506Z" level=info msg="CreateContainer within sandbox \"66c06ab2db8bdcc4870f14f307186cdd7d49e51662cc5a081ebe9166a2c23354\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09\""
Sep 4 15:47:28.718779 containerd[1507]: time="2025-09-04T15:47:28.718577218Z" level=info msg="StartContainer for \"d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09\""
Sep 4 15:47:28.719695 containerd[1507]: time="2025-09-04T15:47:28.719665881Z" level=info msg="connecting to shim d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09" address="unix:///run/containerd/s/9cee8f63ce10a0e5fba5134e12c18c0ced6bed568905b9401d8cd92c30ba20b4" protocol=ttrpc version=3
Sep 4 15:47:28.736521 systemd[1]: Started cri-containerd-d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09.scope - libcontainer container d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09.
Sep 4 15:47:28.768895 containerd[1507]: time="2025-09-04T15:47:28.768856584Z" level=info msg="StartContainer for \"d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09\" returns successfully"
Sep 4 15:47:28.912354 kubelet[1848]: E0904 15:47:28.912223 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:29.208810 containerd[1507]: time="2025-09-04T15:47:29.208706129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09\" id:\"055a5b598838f0847ead93e042d84d85ddae0bb513a51f3acc83f02119fb34fb\" pid:4097 exited_at:{seconds:1757000849 nanos:208291535}"
Sep 4 15:47:29.223643 kubelet[1848]: I0904 15:47:29.223538 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68dbb789b-bbg75" podStartSLOduration=50.828864893 podStartE2EDuration="1m27.223521021s" podCreationTimestamp="2025-09-04 15:46:02 +0000 UTC" firstStartedPulling="2025-09-04 15:46:52.243490919 +0000 UTC m=+31.299781244" lastFinishedPulling="2025-09-04 15:47:28.638147047 +0000 UTC m=+67.694437372" observedRunningTime="2025-09-04 15:47:29.186709867 +0000 UTC m=+68.243000192" watchObservedRunningTime="2025-09-04 15:47:29.223521021 +0000 UTC m=+68.279811346"
Sep 4 15:47:29.913351 kubelet[1848]: E0904 15:47:29.913287 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:30.914123 kubelet[1848]: E0904 15:47:30.914069 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:31.914783 kubelet[1848]: E0904 15:47:31.914722 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:32.915391 kubelet[1848]: E0904 15:47:32.915348 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:33.916192 kubelet[1848]: E0904 15:47:33.916125 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:34.917147 kubelet[1848]: E0904 15:47:34.917081 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:35.918123 kubelet[1848]: E0904 15:47:35.918007 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:36.919204 kubelet[1848]: E0904 15:47:36.919144 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:37.919635 kubelet[1848]: E0904 15:47:37.919583 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:47:38.043240 containerd[1507]: time="2025-09-04T15:47:38.042600688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:38.043653 containerd[1507]: time="2025-09-04T15:47:38.043277280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 4 15:47:38.043856 containerd[1507]: time="2025-09-04T15:47:38.043834473Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:38.045971 containerd[1507]: time="2025-09-04T15:47:38.045764009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:47:38.046327 containerd[1507]: time="2025-09-04T15:47:38.046283243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 9.407521445s"
Sep 4 15:47:38.046395 containerd[1507]: time="2025-09-04T15:47:38.046380482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 4 15:47:38.048586 containerd[1507]: time="2025-09-04T15:47:38.048509936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 4 15:47:38.049508 containerd[1507]: time="2025-09-04T15:47:38.049469204Z" level=info msg="CreateContainer within sandbox \"bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 4 15:47:38.060757 containerd[1507]: time="2025-09-04T15:47:38.059664881Z" level=info msg="Container f9186f07767f7ba497d30ed18e075b1e15e3169925f0f1cdb70f227dbf9389e0: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:47:38.067456 containerd[1507]: time="2025-09-04T15:47:38.067418706Z" level=info msg="CreateContainer within sandbox \"bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f9186f07767f7ba497d30ed18e075b1e15e3169925f0f1cdb70f227dbf9389e0\""
Sep 4 15:47:38.068343 containerd[1507]: time="2025-09-04T15:47:38.068016699Z" level=info msg="StartContainer for \"f9186f07767f7ba497d30ed18e075b1e15e3169925f0f1cdb70f227dbf9389e0\""
Sep 4 15:47:38.069382 containerd[1507]: time="2025-09-04T15:47:38.069355403Z" level=info msg="connecting to shim f9186f07767f7ba497d30ed18e075b1e15e3169925f0f1cdb70f227dbf9389e0"
address="unix:///run/containerd/s/a74e50d14038941f5d37de56c8ef823fa94875636f24667c8cb89a90ae2313bc" protocol=ttrpc version=3 Sep 4 15:47:38.091488 systemd[1]: Started cri-containerd-f9186f07767f7ba497d30ed18e075b1e15e3169925f0f1cdb70f227dbf9389e0.scope - libcontainer container f9186f07767f7ba497d30ed18e075b1e15e3169925f0f1cdb70f227dbf9389e0. Sep 4 15:47:38.163004 containerd[1507]: time="2025-09-04T15:47:38.162961545Z" level=info msg="StartContainer for \"f9186f07767f7ba497d30ed18e075b1e15e3169925f0f1cdb70f227dbf9389e0\" returns successfully" Sep 4 15:47:38.390810 containerd[1507]: time="2025-09-04T15:47:38.390764697Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:38.392582 containerd[1507]: time="2025-09-04T15:47:38.391243811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 15:47:38.402851 containerd[1507]: time="2025-09-04T15:47:38.402784590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 354.231975ms" Sep 4 15:47:38.402851 containerd[1507]: time="2025-09-04T15:47:38.402842750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 4 15:47:38.403970 containerd[1507]: time="2025-09-04T15:47:38.403858177Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 4 15:47:38.405588 containerd[1507]: time="2025-09-04T15:47:38.404727687Z" level=info msg="CreateContainer within sandbox \"3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705\" for 
container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 15:47:38.412691 containerd[1507]: time="2025-09-04T15:47:38.410730534Z" level=info msg="Container 255267abecad8e891800372c5aec5a95950babb3fb0650a241454811e7d40e2f: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:47:38.418676 containerd[1507]: time="2025-09-04T15:47:38.418637758Z" level=info msg="CreateContainer within sandbox \"3c0c8ffee0782b61e957963ca91ae395042479b218d988f29a4e422f0ebbf705\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"255267abecad8e891800372c5aec5a95950babb3fb0650a241454811e7d40e2f\"" Sep 4 15:47:38.419338 containerd[1507]: time="2025-09-04T15:47:38.419293350Z" level=info msg="StartContainer for \"255267abecad8e891800372c5aec5a95950babb3fb0650a241454811e7d40e2f\"" Sep 4 15:47:38.420386 containerd[1507]: time="2025-09-04T15:47:38.420276578Z" level=info msg="connecting to shim 255267abecad8e891800372c5aec5a95950babb3fb0650a241454811e7d40e2f" address="unix:///run/containerd/s/1a6295a47a1720cfad8f463cdea084ee8b14b6658ad5e6ddebb2ede2e5f66d6c" protocol=ttrpc version=3 Sep 4 15:47:38.446464 systemd[1]: Started cri-containerd-255267abecad8e891800372c5aec5a95950babb3fb0650a241454811e7d40e2f.scope - libcontainer container 255267abecad8e891800372c5aec5a95950babb3fb0650a241454811e7d40e2f. 
Sep 4 15:47:38.481410 containerd[1507]: time="2025-09-04T15:47:38.481366915Z" level=info msg="StartContainer for \"255267abecad8e891800372c5aec5a95950babb3fb0650a241454811e7d40e2f\" returns successfully" Sep 4 15:47:38.920750 kubelet[1848]: E0904 15:47:38.920700 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:39.872671 kubelet[1848]: I0904 15:47:39.872597 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-857d4f49b9-497bn" podStartSLOduration=58.669802216 podStartE2EDuration="1m42.872578032s" podCreationTimestamp="2025-09-04 15:45:57 +0000 UTC" firstStartedPulling="2025-09-04 15:46:54.200821925 +0000 UTC m=+33.257112250" lastFinishedPulling="2025-09-04 15:47:38.403597781 +0000 UTC m=+77.459888066" observedRunningTime="2025-09-04 15:47:39.211766463 +0000 UTC m=+78.268056788" watchObservedRunningTime="2025-09-04 15:47:39.872578032 +0000 UTC m=+78.928868357" Sep 4 15:47:39.920888 kubelet[1848]: E0904 15:47:39.920838 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:40.921427 kubelet[1848]: E0904 15:47:40.921325 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:41.868698 kubelet[1848]: E0904 15:47:41.868651 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:41.922314 kubelet[1848]: E0904 15:47:41.922276 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:42.922896 kubelet[1848]: E0904 15:47:42.922856 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:43.423981 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2056124355.mount: Deactivated successfully. Sep 4 15:47:43.923641 kubelet[1848]: E0904 15:47:43.923606 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:44.556802 containerd[1507]: time="2025-09-04T15:47:44.556759581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:44.557266 containerd[1507]: time="2025-09-04T15:47:44.557233936Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69986522" Sep 4 15:47:44.558081 containerd[1507]: time="2025-09-04T15:47:44.558039168Z" level=info msg="ImageCreate event name:\"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:44.560855 containerd[1507]: time="2025-09-04T15:47:44.560807819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:44.561708 containerd[1507]: time="2025-09-04T15:47:44.561673010Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 6.157787473s" Sep 4 15:47:44.561747 containerd[1507]: time="2025-09-04T15:47:44.561708569Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 4 15:47:44.562878 containerd[1507]: time="2025-09-04T15:47:44.562846717Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 15:47:44.564015 containerd[1507]: time="2025-09-04T15:47:44.563987785Z" level=info msg="CreateContainer within sandbox \"1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 4 15:47:44.571166 containerd[1507]: time="2025-09-04T15:47:44.570625916Z" level=info msg="Container 281e1f881104457f47a5415158ef440245568dc659051c95b99c055e9b75b73c: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:47:44.578224 containerd[1507]: time="2025-09-04T15:47:44.578180837Z" level=info msg="CreateContainer within sandbox \"1fb61296bbf2d1504ead41995c2f4b86f7e3e5bc4337920534208444c61a240e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"281e1f881104457f47a5415158ef440245568dc659051c95b99c055e9b75b73c\"" Sep 4 15:47:44.578981 containerd[1507]: time="2025-09-04T15:47:44.578938109Z" level=info msg="StartContainer for \"281e1f881104457f47a5415158ef440245568dc659051c95b99c055e9b75b73c\"" Sep 4 15:47:44.579850 containerd[1507]: time="2025-09-04T15:47:44.579815020Z" level=info msg="connecting to shim 281e1f881104457f47a5415158ef440245568dc659051c95b99c055e9b75b73c" address="unix:///run/containerd/s/1a104dc12d8401c9af03418f2add981fb44e4c30f21832e27316cbd704b1aeec" protocol=ttrpc version=3 Sep 4 15:47:44.596452 systemd[1]: Started cri-containerd-281e1f881104457f47a5415158ef440245568dc659051c95b99c055e9b75b73c.scope - libcontainer container 281e1f881104457f47a5415158ef440245568dc659051c95b99c055e9b75b73c. 
Sep 4 15:47:44.624869 containerd[1507]: time="2025-09-04T15:47:44.624736589Z" level=info msg="StartContainer for \"281e1f881104457f47a5415158ef440245568dc659051c95b99c055e9b75b73c\" returns successfully" Sep 4 15:47:44.924339 kubelet[1848]: E0904 15:47:44.924277 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:45.202521 containerd[1507]: time="2025-09-04T15:47:45.196094377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\" id:\"babc74174cbd7cdf2b89b0efa153c82604cac9f1cf6220dfe32dbd0a12ded6c2\" pid:4284 exited_at:{seconds:1757000865 nanos:195313985}" Sep 4 15:47:45.924596 kubelet[1848]: E0904 15:47:45.924537 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:46.925626 kubelet[1848]: E0904 15:47:46.925580 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:47.841805 kubelet[1848]: I0904 15:47:47.841745 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-nz2hk" podStartSLOduration=16.491006832 podStartE2EDuration="1m4.84172375s" podCreationTimestamp="2025-09-04 15:46:43 +0000 UTC" firstStartedPulling="2025-09-04 15:46:56.211975001 +0000 UTC m=+35.268265286" lastFinishedPulling="2025-09-04 15:47:44.562691879 +0000 UTC m=+83.618982204" observedRunningTime="2025-09-04 15:47:45.229625554 +0000 UTC m=+84.285915879" watchObservedRunningTime="2025-09-04 15:47:47.84172375 +0000 UTC m=+86.898014075" Sep 4 15:47:47.848474 systemd[1]: Created slice kubepods-besteffort-podfc6dacb7_16a9_48e3_bcd3_ec09db035169.slice - libcontainer container kubepods-besteffort-podfc6dacb7_16a9_48e3_bcd3_ec09db035169.slice. 
Sep 4 15:47:47.912871 kubelet[1848]: I0904 15:47:47.912779 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6d7\" (UniqueName: \"kubernetes.io/projected/fc6dacb7-16a9-48e3-bcd3-ec09db035169-kube-api-access-9m6d7\") pod \"nfs-server-provisioner-0\" (UID: \"fc6dacb7-16a9-48e3-bcd3-ec09db035169\") " pod="default/nfs-server-provisioner-0" Sep 4 15:47:47.912871 kubelet[1848]: I0904 15:47:47.912822 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fc6dacb7-16a9-48e3-bcd3-ec09db035169-data\") pod \"nfs-server-provisioner-0\" (UID: \"fc6dacb7-16a9-48e3-bcd3-ec09db035169\") " pod="default/nfs-server-provisioner-0" Sep 4 15:47:47.925914 kubelet[1848]: E0904 15:47:47.925867 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:48.151775 containerd[1507]: time="2025-09-04T15:47:48.151732845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fc6dacb7-16a9-48e3-bcd3-ec09db035169,Namespace:default,Attempt:0,}" Sep 4 15:47:48.267896 systemd-networkd[1427]: cali60e51b789ff: Link UP Sep 4 15:47:48.268055 systemd-networkd[1427]: cali60e51b789ff: Gained carrier Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.198 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default fc6dacb7-16a9-48e3-bcd3-ec09db035169 1380 0 2025-09-04 15:47:47 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.45 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.45-k8s-nfs--server--provisioner--0-" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.198 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.223 [INFO][4318] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" HandleID="k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Workload="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.223 [INFO][4318] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" HandleID="k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Workload="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e70), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.45", "pod":"nfs-server-provisioner-0", "timestamp":"2025-09-04 
15:47:48.222993687 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.223 [INFO][4318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.223 [INFO][4318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.223 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.234 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.238 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.242 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.244 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.248 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.249 [INFO][4318] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.250 [INFO][4318] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.255 [INFO][4318] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.263 [INFO][4318] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.34.138/26] block=192.168.34.128/26 handle="k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.263 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.138/26] handle="k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" host="10.0.0.45" Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.263 [INFO][4318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 15:47:48.280742 containerd[1507]: 2025-09-04 15:47:48.263 [INFO][4318] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.138/26] IPv6=[] ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" HandleID="k8s-pod-network.4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Workload="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" Sep 4 15:47:48.281240 containerd[1507]: 2025-09-04 15:47:48.265 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"fc6dacb7-16a9-48e3-bcd3-ec09db035169", ResourceVersion:"1380", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 47, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.34.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:47:48.281240 containerd[1507]: 2025-09-04 15:47:48.265 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.138/32] ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" Sep 4 15:47:48.281240 containerd[1507]: 2025-09-04 15:47:48.265 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" Sep 4 15:47:48.281240 containerd[1507]: 2025-09-04 15:47:48.267 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" Sep 4 15:47:48.281395 containerd[1507]: 2025-09-04 15:47:48.268 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"fc6dacb7-16a9-48e3-bcd3-ec09db035169", ResourceVersion:"1380", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 47, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.34.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"a2:4c:3e:3f:9b:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:47:48.281395 containerd[1507]: 2025-09-04 15:47:48.278 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.45-k8s-nfs--server--provisioner--0-eth0" Sep 4 15:47:48.317781 containerd[1507]: time="2025-09-04T15:47:48.317663227Z" level=info msg="connecting to shim 4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e" address="unix:///run/containerd/s/89df88bb09dfcf95c56ce1029045ec3bb4b523245d4fdde9d735331ba4938fcb" namespace=k8s.io protocol=ttrpc version=3 Sep 4 15:47:48.338484 systemd[1]: Started cri-containerd-4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e.scope - libcontainer container 4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e. 
Sep 4 15:47:48.348665 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 15:47:48.394783 containerd[1507]: time="2025-09-04T15:47:48.394728854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fc6dacb7-16a9-48e3-bcd3-ec09db035169,Namespace:default,Attempt:0,} returns sandbox id \"4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e\"" Sep 4 15:47:48.926832 kubelet[1848]: E0904 15:47:48.926777 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:48.999697 containerd[1507]: time="2025-09-04T15:47:48.999652980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:49.000245 containerd[1507]: time="2025-09-04T15:47:49.000105256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 4 15:47:49.000966 containerd[1507]: time="2025-09-04T15:47:49.000929008Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:49.002810 containerd[1507]: time="2025-09-04T15:47:49.002780151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:49.004127 containerd[1507]: time="2025-09-04T15:47:49.004099099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 4.441218782s" Sep 4 15:47:49.004206 containerd[1507]: time="2025-09-04T15:47:49.004130499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 4 15:47:49.005321 containerd[1507]: time="2025-09-04T15:47:49.004930451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 15:47:49.005979 containerd[1507]: time="2025-09-04T15:47:49.005951522Z" level=info msg="CreateContainer within sandbox \"07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 15:47:49.012193 containerd[1507]: time="2025-09-04T15:47:49.012135184Z" level=info msg="Container c0b86dcb04447a311ff672cfc5ba2091926f6a99832ecbfdb154c65d42e8ee21: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:47:49.019814 containerd[1507]: time="2025-09-04T15:47:49.019766753Z" level=info msg="CreateContainer within sandbox \"07c7c6ec8eb55465d13009f296462220325ff25841d77e40c9d643a7b7ef4261\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c0b86dcb04447a311ff672cfc5ba2091926f6a99832ecbfdb154c65d42e8ee21\"" Sep 4 15:47:49.021326 containerd[1507]: time="2025-09-04T15:47:49.020196549Z" level=info msg="StartContainer for \"c0b86dcb04447a311ff672cfc5ba2091926f6a99832ecbfdb154c65d42e8ee21\"" Sep 4 15:47:49.021326 containerd[1507]: time="2025-09-04T15:47:49.021207260Z" level=info msg="connecting to shim c0b86dcb04447a311ff672cfc5ba2091926f6a99832ecbfdb154c65d42e8ee21" address="unix:///run/containerd/s/0c8dd6b31d32467deb9993154dff753ea0a99b54f00068eae21c9faf4d8dd752" protocol=ttrpc version=3 Sep 4 15:47:49.026813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672141011.mount: Deactivated successfully. 
Sep 4 15:47:49.052557 systemd[1]: Started cri-containerd-c0b86dcb04447a311ff672cfc5ba2091926f6a99832ecbfdb154c65d42e8ee21.scope - libcontainer container c0b86dcb04447a311ff672cfc5ba2091926f6a99832ecbfdb154c65d42e8ee21. Sep 4 15:47:49.086534 containerd[1507]: time="2025-09-04T15:47:49.086496613Z" level=info msg="StartContainer for \"c0b86dcb04447a311ff672cfc5ba2091926f6a99832ecbfdb154c65d42e8ee21\" returns successfully" Sep 4 15:47:49.242698 kubelet[1848]: I0904 15:47:49.242520 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66bd4fb94d-w5l74" podStartSLOduration=0.960687932 podStartE2EDuration="1m1.242501324s" podCreationTimestamp="2025-09-04 15:46:48 +0000 UTC" firstStartedPulling="2025-09-04 15:46:48.72300158 +0000 UTC m=+27.779291905" lastFinishedPulling="2025-09-04 15:47:49.004814972 +0000 UTC m=+88.061105297" observedRunningTime="2025-09-04 15:47:49.242023288 +0000 UTC m=+88.298313613" watchObservedRunningTime="2025-09-04 15:47:49.242501324 +0000 UTC m=+88.298791689" Sep 4 15:47:49.316707 systemd-networkd[1427]: cali60e51b789ff: Gained IPv6LL Sep 4 15:47:49.927631 kubelet[1848]: E0904 15:47:49.927578 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:50.151106 containerd[1507]: time="2025-09-04T15:47:50.151069834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46a3d45ac13d71270f6d99a2710e3d01902e941ee36de2a741f6296b1110c09c\" id:\"181a99fb2573f2db5ae905c19136bc08265e575b626da66f32ba6d40ee5c7900\" pid:4437 exited_at:{seconds:1757000870 nanos:150768117}" Sep 4 15:47:50.928563 kubelet[1848]: E0904 15:47:50.928501 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:51.418115 containerd[1507]: time="2025-09-04T15:47:51.418045860Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\" id:\"69482979c23a37c59d046b2edac6b12db3435d8a41795339c7cdb0c541eea8a0\" pid:4462 exited_at:{seconds:1757000871 nanos:417776063}" Sep 4 15:47:51.928969 kubelet[1848]: E0904 15:47:51.928926 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:52.929975 kubelet[1848]: E0904 15:47:52.929933 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:53.930315 kubelet[1848]: E0904 15:47:53.930257 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:54.679847 containerd[1507]: time="2025-09-04T15:47:54.679786763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:54.680487 containerd[1507]: time="2025-09-04T15:47:54.680415798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 4 15:47:54.681157 containerd[1507]: time="2025-09-04T15:47:54.681106673Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:54.684338 containerd[1507]: time="2025-09-04T15:47:54.684222087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:54.685178 containerd[1507]: time="2025-09-04T15:47:54.685058480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", 
repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 5.680099989s" Sep 4 15:47:54.685178 containerd[1507]: time="2025-09-04T15:47:54.685091360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 4 15:47:54.686535 containerd[1507]: time="2025-09-04T15:47:54.686494548Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 4 15:47:54.688446 containerd[1507]: time="2025-09-04T15:47:54.688387412Z" level=info msg="CreateContainer within sandbox \"bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 15:47:54.707001 containerd[1507]: time="2025-09-04T15:47:54.705871067Z" level=info msg="Container c31d475404b0ac66b9318641ff2c2efdcc8a007c1332243577f0273b3defd15f: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:47:54.714916 containerd[1507]: time="2025-09-04T15:47:54.714874593Z" level=info msg="CreateContainer within sandbox \"bc4ee81332f39a238923a51d6cf6a0b046e1603c8bc933fa10405817d910ee00\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c31d475404b0ac66b9318641ff2c2efdcc8a007c1332243577f0273b3defd15f\"" Sep 4 15:47:54.715546 containerd[1507]: time="2025-09-04T15:47:54.715512788Z" level=info msg="StartContainer for \"c31d475404b0ac66b9318641ff2c2efdcc8a007c1332243577f0273b3defd15f\"" Sep 4 15:47:54.717506 containerd[1507]: time="2025-09-04T15:47:54.717478171Z" level=info msg="connecting to shim c31d475404b0ac66b9318641ff2c2efdcc8a007c1332243577f0273b3defd15f" address="unix:///run/containerd/s/a74e50d14038941f5d37de56c8ef823fa94875636f24667c8cb89a90ae2313bc" protocol=ttrpc version=3 Sep 4 
15:47:54.742544 systemd[1]: Started cri-containerd-c31d475404b0ac66b9318641ff2c2efdcc8a007c1332243577f0273b3defd15f.scope - libcontainer container c31d475404b0ac66b9318641ff2c2efdcc8a007c1332243577f0273b3defd15f. Sep 4 15:47:54.786201 containerd[1507]: time="2025-09-04T15:47:54.786152122Z" level=info msg="StartContainer for \"c31d475404b0ac66b9318641ff2c2efdcc8a007c1332243577f0273b3defd15f\" returns successfully" Sep 4 15:47:54.931025 kubelet[1848]: E0904 15:47:54.930917 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:55.078379 kubelet[1848]: I0904 15:47:55.078330 1848 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 15:47:55.078379 kubelet[1848]: I0904 15:47:55.078383 1848 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 15:47:55.342647 kubelet[1848]: I0904 15:47:55.342515 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4tbln" podStartSLOduration=31.874877648000002 podStartE2EDuration="1m33.342448696s" podCreationTimestamp="2025-09-04 15:46:22 +0000 UTC" firstStartedPulling="2025-09-04 15:46:53.218562103 +0000 UTC m=+32.274852428" lastFinishedPulling="2025-09-04 15:47:54.686133071 +0000 UTC m=+93.742423476" observedRunningTime="2025-09-04 15:47:55.342133098 +0000 UTC m=+94.398423423" watchObservedRunningTime="2025-09-04 15:47:55.342448696 +0000 UTC m=+94.398739021" Sep 4 15:47:55.931839 kubelet[1848]: E0904 15:47:55.931762 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:56.932832 kubelet[1848]: E0904 15:47:56.932702 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Sep 4 15:47:57.400589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556230474.mount: Deactivated successfully. Sep 4 15:47:57.932986 kubelet[1848]: E0904 15:47:57.932950 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:58.905035 containerd[1507]: time="2025-09-04T15:47:58.904986341Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Sep 4 15:47:58.909606 containerd[1507]: time="2025-09-04T15:47:58.909561506Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 4.223023438s" Sep 4 15:47:58.909606 containerd[1507]: time="2025-09-04T15:47:58.909606986Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 4 15:47:58.911704 containerd[1507]: time="2025-09-04T15:47:58.911667290Z" level=info msg="CreateContainer within sandbox \"4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 4 15:47:58.913967 containerd[1507]: time="2025-09-04T15:47:58.913629235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:58.914472 containerd[1507]: time="2025-09-04T15:47:58.914435829Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:58.915129 containerd[1507]: time="2025-09-04T15:47:58.915101504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 15:47:58.918135 containerd[1507]: time="2025-09-04T15:47:58.917289728Z" level=info msg="Container 4b14fde1367d230c1669d390f62db728fe3f7599f7e9da2ee7aca11dc1017692: CDI devices from CRI Config.CDIDevices: []" Sep 4 15:47:58.925914 containerd[1507]: time="2025-09-04T15:47:58.925873543Z" level=info msg="CreateContainer within sandbox \"4e7bef2d804c232221fb1b497b75ee071440bdd3cb623ea64d2f47e1d14c137e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4b14fde1367d230c1669d390f62db728fe3f7599f7e9da2ee7aca11dc1017692\"" Sep 4 15:47:58.926714 containerd[1507]: time="2025-09-04T15:47:58.926658457Z" level=info msg="StartContainer for \"4b14fde1367d230c1669d390f62db728fe3f7599f7e9da2ee7aca11dc1017692\"" Sep 4 15:47:58.927646 containerd[1507]: time="2025-09-04T15:47:58.927620089Z" level=info msg="connecting to shim 4b14fde1367d230c1669d390f62db728fe3f7599f7e9da2ee7aca11dc1017692" address="unix:///run/containerd/s/89df88bb09dfcf95c56ce1029045ec3bb4b523245d4fdde9d735331ba4938fcb" protocol=ttrpc version=3 Sep 4 15:47:58.933887 kubelet[1848]: E0904 15:47:58.933855 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:47:58.949521 systemd[1]: Started cri-containerd-4b14fde1367d230c1669d390f62db728fe3f7599f7e9da2ee7aca11dc1017692.scope - libcontainer container 4b14fde1367d230c1669d390f62db728fe3f7599f7e9da2ee7aca11dc1017692. 
Sep 4 15:47:58.975990 containerd[1507]: time="2025-09-04T15:47:58.975955043Z" level=info msg="StartContainer for \"4b14fde1367d230c1669d390f62db728fe3f7599f7e9da2ee7aca11dc1017692\" returns successfully" Sep 4 15:47:59.200439 containerd[1507]: time="2025-09-04T15:47:59.199702417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09\" id:\"80a93343b6b2418d6ed3e14ec4470a69eba70070a5a22ec8ee3099aaa6862be8\" pid:4622 exited_at:{seconds:1757000879 nanos:199492738}" Sep 4 15:47:59.299669 kubelet[1848]: I0904 15:47:59.299595 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.785244777 podStartE2EDuration="12.299578275s" podCreationTimestamp="2025-09-04 15:47:47 +0000 UTC" firstStartedPulling="2025-09-04 15:47:48.395893083 +0000 UTC m=+87.452183408" lastFinishedPulling="2025-09-04 15:47:58.910226581 +0000 UTC m=+97.966516906" observedRunningTime="2025-09-04 15:47:59.299066159 +0000 UTC m=+98.355356484" watchObservedRunningTime="2025-09-04 15:47:59.299578275 +0000 UTC m=+98.355868600" Sep 4 15:47:59.934937 kubelet[1848]: E0904 15:47:59.934888 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:00.935200 kubelet[1848]: E0904 15:48:00.935155 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:01.868740 kubelet[1848]: E0904 15:48:01.868665 1848 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:01.935385 kubelet[1848]: E0904 15:48:01.935332 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:02.935686 kubelet[1848]: E0904 15:48:02.935634 1848 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:03.015113 kubelet[1848]: E0904 15:48:03.015074 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 15:48:03.936100 kubelet[1848]: E0904 15:48:03.936053 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:04.936225 kubelet[1848]: E0904 15:48:04.936164 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:05.937055 kubelet[1848]: E0904 15:48:05.937009 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:06.937622 kubelet[1848]: E0904 15:48:06.937570 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:07.938356 kubelet[1848]: E0904 15:48:07.938286 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:08.357154 systemd[1]: Created slice kubepods-besteffort-poda3f4a4bc_0e5c_4f99_88fc_25ad12e0aa3e.slice - libcontainer container kubepods-besteffort-poda3f4a4bc_0e5c_4f99_88fc_25ad12e0aa3e.slice. 
Sep 4 15:48:08.449065 kubelet[1848]: I0904 15:48:08.448955 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd6fs\" (UniqueName: \"kubernetes.io/projected/a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e-kube-api-access-bd6fs\") pod \"test-pod-1\" (UID: \"a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e\") " pod="default/test-pod-1" Sep 4 15:48:08.450575 kubelet[1848]: I0904 15:48:08.449015 1848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7adfb6e5-7eb5-4603-a7e5-4ac54362b6e0\" (UniqueName: \"kubernetes.io/nfs/a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e-pvc-7adfb6e5-7eb5-4603-a7e5-4ac54362b6e0\") pod \"test-pod-1\" (UID: \"a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e\") " pod="default/test-pod-1" Sep 4 15:48:08.577341 kernel: netfs: FS-Cache loaded Sep 4 15:48:08.600647 kernel: RPC: Registered named UNIX socket transport module. Sep 4 15:48:08.600746 kernel: RPC: Registered udp transport module. Sep 4 15:48:08.600765 kernel: RPC: Registered tcp transport module. Sep 4 15:48:08.600781 kernel: RPC: Registered tcp-with-tls transport module. Sep 4 15:48:08.602398 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Sep 4 15:48:08.775456 kernel: NFS: Registering the id_resolver key type Sep 4 15:48:08.775574 kernel: Key type id_resolver registered Sep 4 15:48:08.775612 kernel: Key type id_legacy registered Sep 4 15:48:08.793028 nfsidmap[4662]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Sep 4 15:48:08.793609 nfsidmap[4662]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 4 15:48:08.796732 nfsidmap[4665]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Sep 4 15:48:08.796904 nfsidmap[4665]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 4 15:48:08.802069 nfsrahead[4669]: setting /var/lib/kubelet/pods/a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e/volumes/kubernetes.io~nfs/pvc-7adfb6e5-7eb5-4603-a7e5-4ac54362b6e0 readahead to 128 Sep 4 15:48:08.938774 kubelet[1848]: E0904 15:48:08.938725 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 4 15:48:08.960728 containerd[1507]: time="2025-09-04T15:48:08.960684128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e,Namespace:default,Attempt:0,}" Sep 4 15:48:09.061444 systemd-networkd[1427]: cali5ec59c6bf6e: Link UP Sep 4 15:48:09.061882 systemd-networkd[1427]: cali5ec59c6bf6e: Gained carrier Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:08.996 [INFO][4670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.45-k8s-test--pod--1-eth0 default a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e 1475 0 2025-09-04 15:47:48 +0000 UTC 
map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.45 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.45-k8s-test--pod--1-" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:08.996 [INFO][4670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.45-k8s-test--pod--1-eth0" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.020 [INFO][4684] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" HandleID="k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Workload="10.0.0.45-k8s-test--pod--1-eth0" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.020 [INFO][4684] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" HandleID="k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Workload="10.0.0.45-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001377c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.45", "pod":"test-pod-1", "timestamp":"2025-09-04 15:48:09.020752519 +0000 UTC"}, Hostname:"10.0.0.45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.020 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.020 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.021 [INFO][4684] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.45' Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.031 [INFO][4684] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.036 [INFO][4684] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.040 [INFO][4684] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.042 [INFO][4684] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.044 [INFO][4684] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.045 [INFO][4684] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.046 [INFO][4684] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.050 [INFO][4684] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.057 [INFO][4684] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.34.139/26] block=192.168.34.128/26 handle="k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.057 [INFO][4684] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.139/26] handle="k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" host="10.0.0.45" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.057 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.057 [INFO][4684] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.139/26] IPv6=[] ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" HandleID="k8s-pod-network.4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Workload="10.0.0.45-k8s-test--pod--1-eth0" Sep 4 15:48:09.073369 containerd[1507]: 2025-09-04 15:48:09.059 [INFO][4670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.45-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e", ResourceVersion:"1475", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.139/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 15:48:09.073864 containerd[1507]: 2025-09-04 15:48:09.059 [INFO][4670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.139/32] ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.45-k8s-test--pod--1-eth0" Sep 4 15:48:09.073864 containerd[1507]: 2025-09-04 15:48:09.059 [INFO][4670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.45-k8s-test--pod--1-eth0" Sep 4 15:48:09.073864 containerd[1507]: 2025-09-04 15:48:09.062 [INFO][4670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.45-k8s-test--pod--1-eth0" Sep 4 15:48:09.073864 containerd[1507]: 2025-09-04 15:48:09.062 [INFO][4670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.45-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.45-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", 
UID:"a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e", ResourceVersion:"1475", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 15, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.45", ContainerID:"4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.139/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"f6:89:e6:a4:9b:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 15:48:09.073864 containerd[1507]: 2025-09-04 15:48:09.071 [INFO][4670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.45-k8s-test--pod--1-eth0"
Sep 4 15:48:09.098040 containerd[1507]: time="2025-09-04T15:48:09.097988770Z" level=info msg="connecting to shim 4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f" address="unix:///run/containerd/s/824c3935ec9af0ec18a792e0f47f6fb6987cdd529cedcd2ab52c1fe9ff3ad91a" namespace=k8s.io protocol=ttrpc version=3
Sep 4 15:48:09.125537 systemd[1]: Started cri-containerd-4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f.scope - libcontainer container 4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f.
Sep 4 15:48:09.135778 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 15:48:09.154890 containerd[1507]: time="2025-09-04T15:48:09.154854705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a3f4a4bc-0e5c-4f99-88fc-25ad12e0aa3e,Namespace:default,Attempt:0,} returns sandbox id \"4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f\""
Sep 4 15:48:09.156065 containerd[1507]: time="2025-09-04T15:48:09.156020858Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Sep 4 15:48:09.784104 containerd[1507]: time="2025-09-04T15:48:09.784054365Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 15:48:09.784700 containerd[1507]: time="2025-09-04T15:48:09.784528722Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Sep 4 15:48:09.787208 containerd[1507]: time="2025-09-04T15:48:09.787174666Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 631.124649ms"
Sep 4 15:48:09.787208 containerd[1507]: time="2025-09-04T15:48:09.787211146Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\""
Sep 4 15:48:09.788994 containerd[1507]: time="2025-09-04T15:48:09.788965335Z" level=info msg="CreateContainer within sandbox \"4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Sep 4 15:48:09.795908 containerd[1507]: time="2025-09-04T15:48:09.795876493Z" level=info msg="Container d38330e7e4b801f9954c9ddaf0166a74ce5eb9873abe911c273f9c3e6b1307f0: CDI devices from CRI Config.CDIDevices: []"
Sep 4 15:48:09.803446 containerd[1507]: time="2025-09-04T15:48:09.803398447Z" level=info msg="CreateContainer within sandbox \"4d95da88e1d613e24cb44d701ddc6864edad2a8bdcf59fd3ca13ee1ccb08c00f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d38330e7e4b801f9954c9ddaf0166a74ce5eb9873abe911c273f9c3e6b1307f0\""
Sep 4 15:48:09.804042 containerd[1507]: time="2025-09-04T15:48:09.803942804Z" level=info msg="StartContainer for \"d38330e7e4b801f9954c9ddaf0166a74ce5eb9873abe911c273f9c3e6b1307f0\""
Sep 4 15:48:09.805072 containerd[1507]: time="2025-09-04T15:48:09.805045997Z" level=info msg="connecting to shim d38330e7e4b801f9954c9ddaf0166a74ce5eb9873abe911c273f9c3e6b1307f0" address="unix:///run/containerd/s/824c3935ec9af0ec18a792e0f47f6fb6987cdd529cedcd2ab52c1fe9ff3ad91a" protocol=ttrpc version=3
Sep 4 15:48:09.826459 systemd[1]: Started cri-containerd-d38330e7e4b801f9954c9ddaf0166a74ce5eb9873abe911c273f9c3e6b1307f0.scope - libcontainer container d38330e7e4b801f9954c9ddaf0166a74ce5eb9873abe911c273f9c3e6b1307f0.
Sep 4 15:48:09.851381 containerd[1507]: time="2025-09-04T15:48:09.851318837Z" level=info msg="StartContainer for \"d38330e7e4b801f9954c9ddaf0166a74ce5eb9873abe911c273f9c3e6b1307f0\" returns successfully"
Sep 4 15:48:09.939222 kubelet[1848]: E0904 15:48:09.939149 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:48:10.298688 kubelet[1848]: I0904 15:48:10.298607 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.666484872 podStartE2EDuration="22.298590235s" podCreationTimestamp="2025-09-04 15:47:48 +0000 UTC" firstStartedPulling="2025-09-04 15:48:09.155705499 +0000 UTC m=+108.211995824" lastFinishedPulling="2025-09-04 15:48:09.787810902 +0000 UTC m=+108.844101187" observedRunningTime="2025-09-04 15:48:10.298363036 +0000 UTC m=+109.354653361" watchObservedRunningTime="2025-09-04 15:48:10.298590235 +0000 UTC m=+109.354880560"
Sep 4 15:48:10.372506 systemd-networkd[1427]: cali5ec59c6bf6e: Gained IPv6LL
Sep 4 15:48:10.939633 kubelet[1848]: E0904 15:48:10.939586 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:48:11.940674 kubelet[1848]: E0904 15:48:11.940628 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:48:12.941454 kubelet[1848]: E0904 15:48:12.941404 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:48:13.941727 kubelet[1848]: E0904 15:48:13.941678 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:48:14.942118 kubelet[1848]: E0904 15:48:14.942061 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 4 15:48:15.063694 containerd[1507]: time="2025-09-04T15:48:15.063659105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2b2d4479f6bc4eb4b076179957dacb133bd1b1808caf540eceea6543c493a09\" id:\"d3148d1247fa7d6d74ce328071a5bd4e5787feeb8efc9b5d1a8387ecc912eac2\" pid:4818 exited_at:{seconds:1757000895 nanos:63468226}"
Sep 4 15:48:15.200116 containerd[1507]: time="2025-09-04T15:48:15.200003443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1aa3454586af3c6d13e264a2ad2f2f400c52037b1dc809ef709ef9789bcd46c6\" id:\"3abf82fc02f2be861ea3b82354bf99ca9c82c7b306f9cbe84955480eb2b70ab9\" pid:4840 exited_at:{seconds:1757000895 nanos:199725844}"
Sep 4 15:48:15.943088 kubelet[1848]: E0904 15:48:15.942980 1848 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"