Sep 5 06:01:38.757129 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 5 06:01:38.757157 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 5 04:25:57 -00 2025 Sep 5 06:01:38.757169 kernel: KASLR enabled Sep 5 06:01:38.757176 kernel: efi: EFI v2.7 by EDK II Sep 5 06:01:38.757182 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Sep 5 06:01:38.757187 kernel: random: crng init done Sep 5 06:01:38.757194 kernel: secureboot: Secure boot disabled Sep 5 06:01:38.757200 kernel: ACPI: Early table checksum verification disabled Sep 5 06:01:38.757205 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Sep 5 06:01:38.757212 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 5 06:01:38.757218 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757224 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757229 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757236 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757243 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757251 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757257 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757263 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757269 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:01:38.757275 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 5 06:01:38.757281 kernel: 
ACPI: Use ACPI SPCR as default console: No Sep 5 06:01:38.757288 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 5 06:01:38.757294 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Sep 5 06:01:38.757300 kernel: Zone ranges: Sep 5 06:01:38.757306 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 5 06:01:38.757313 kernel: DMA32 empty Sep 5 06:01:38.757319 kernel: Normal empty Sep 5 06:01:38.757325 kernel: Device empty Sep 5 06:01:38.757331 kernel: Movable zone start for each node Sep 5 06:01:38.757337 kernel: Early memory node ranges Sep 5 06:01:38.757342 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Sep 5 06:01:38.757348 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Sep 5 06:01:38.757354 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Sep 5 06:01:38.757360 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Sep 5 06:01:38.757366 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Sep 5 06:01:38.757372 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Sep 5 06:01:38.757378 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Sep 5 06:01:38.757385 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Sep 5 06:01:38.757391 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Sep 5 06:01:38.757397 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 5 06:01:38.757405 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 5 06:01:38.757412 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 5 06:01:38.757418 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 5 06:01:38.757425 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 5 06:01:38.757432 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 5 06:01:38.757438 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Sep 5 06:01:38.757444 kernel: psci: probing for conduit method from ACPI. 
Sep 5 06:01:38.757451 kernel: psci: PSCIv1.1 detected in firmware. Sep 5 06:01:38.757457 kernel: psci: Using standard PSCI v0.2 function IDs Sep 5 06:01:38.757463 kernel: psci: Trusted OS migration not required Sep 5 06:01:38.757469 kernel: psci: SMC Calling Convention v1.1 Sep 5 06:01:38.757476 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 5 06:01:38.757482 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 5 06:01:38.757490 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 5 06:01:38.757496 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 5 06:01:38.757503 kernel: Detected PIPT I-cache on CPU0 Sep 5 06:01:38.757509 kernel: CPU features: detected: GIC system register CPU interface Sep 5 06:01:38.757515 kernel: CPU features: detected: Spectre-v4 Sep 5 06:01:38.757522 kernel: CPU features: detected: Spectre-BHB Sep 5 06:01:38.757528 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 5 06:01:38.757534 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 5 06:01:38.757541 kernel: CPU features: detected: ARM erratum 1418040 Sep 5 06:01:38.757563 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 5 06:01:38.757569 kernel: alternatives: applying boot alternatives Sep 5 06:01:38.757577 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ad0560d5d82b42c8405832aa39f4f52a20b919c503afe4e7ecc72adb2e451fae Sep 5 06:01:38.757585 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 5 06:01:38.757592 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 06:01:38.757598 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 06:01:38.757604 kernel: Fallback order for Node 0: 0 Sep 5 06:01:38.757611 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 5 06:01:38.757617 kernel: Policy zone: DMA Sep 5 06:01:38.757623 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 06:01:38.757630 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 5 06:01:38.757636 kernel: software IO TLB: area num 4. Sep 5 06:01:38.757642 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 5 06:01:38.757649 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Sep 5 06:01:38.757656 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 5 06:01:38.757663 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 06:01:38.757670 kernel: rcu: RCU event tracing is enabled. Sep 5 06:01:38.757676 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 5 06:01:38.757683 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 06:01:38.757689 kernel: Tracing variant of Tasks RCU enabled. Sep 5 06:01:38.757696 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 06:01:38.757702 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 5 06:01:38.757708 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 06:01:38.757720 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 5 06:01:38.757727 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 5 06:01:38.757734 kernel: GICv3: 256 SPIs implemented Sep 5 06:01:38.757741 kernel: GICv3: 0 Extended SPIs implemented Sep 5 06:01:38.757747 kernel: Root IRQ handler: gic_handle_irq Sep 5 06:01:38.757753 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 5 06:01:38.757759 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 5 06:01:38.757765 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 5 06:01:38.757772 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 5 06:01:38.757778 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 5 06:01:38.757785 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 5 06:01:38.757791 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 5 06:01:38.757797 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 5 06:01:38.757804 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 06:01:38.757811 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 06:01:38.757818 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 5 06:01:38.757824 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 5 06:01:38.757831 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 5 06:01:38.757837 kernel: arm-pv: using stolen time PV Sep 5 06:01:38.757844 kernel: Console: colour dummy device 80x25 Sep 5 06:01:38.757850 kernel: ACPI: Core revision 20240827 Sep 5 06:01:38.757857 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Sep 5 06:01:38.757863 kernel: pid_max: default: 32768 minimum: 301 Sep 5 06:01:38.757870 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 5 06:01:38.757877 kernel: landlock: Up and running. Sep 5 06:01:38.757884 kernel: SELinux: Initializing. Sep 5 06:01:38.757890 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 06:01:38.757897 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 06:01:38.757903 kernel: rcu: Hierarchical SRCU implementation. Sep 5 06:01:38.757910 kernel: rcu: Max phase no-delay instances is 400. Sep 5 06:01:38.757917 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 5 06:01:38.757923 kernel: Remapping and enabling EFI services. Sep 5 06:01:38.757930 kernel: smp: Bringing up secondary CPUs ... Sep 5 06:01:38.757942 kernel: Detected PIPT I-cache on CPU1 Sep 5 06:01:38.757949 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 5 06:01:38.757956 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 5 06:01:38.757964 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 06:01:38.757971 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 5 06:01:38.757978 kernel: Detected PIPT I-cache on CPU2 Sep 5 06:01:38.757985 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 5 06:01:38.757994 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 5 06:01:38.758002 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 06:01:38.758009 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 5 06:01:38.758016 kernel: Detected PIPT I-cache on CPU3 Sep 5 06:01:38.758023 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 5 06:01:38.758030 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 5 
06:01:38.758037 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 06:01:38.758043 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 5 06:01:38.758050 kernel: smp: Brought up 1 node, 4 CPUs Sep 5 06:01:38.758057 kernel: SMP: Total of 4 processors activated. Sep 5 06:01:38.758065 kernel: CPU: All CPU(s) started at EL1 Sep 5 06:01:38.758072 kernel: CPU features: detected: 32-bit EL0 Support Sep 5 06:01:38.758079 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 5 06:01:38.758086 kernel: CPU features: detected: Common not Private translations Sep 5 06:01:38.758093 kernel: CPU features: detected: CRC32 instructions Sep 5 06:01:38.758100 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 5 06:01:38.758106 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 5 06:01:38.758113 kernel: CPU features: detected: LSE atomic instructions Sep 5 06:01:38.758120 kernel: CPU features: detected: Privileged Access Never Sep 5 06:01:38.758128 kernel: CPU features: detected: RAS Extension Support Sep 5 06:01:38.758135 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 5 06:01:38.758142 kernel: alternatives: applying system-wide alternatives Sep 5 06:01:38.758149 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 5 06:01:38.758156 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved) Sep 5 06:01:38.758163 kernel: devtmpfs: initialized Sep 5 06:01:38.758170 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 06:01:38.758177 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 5 06:01:38.758184 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 5 06:01:38.758192 kernel: 0 pages in range for non-PLT usage Sep 5 06:01:38.758199 
kernel: 508576 pages in range for PLT usage Sep 5 06:01:38.758207 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 06:01:38.758216 kernel: SMBIOS 3.0.0 present. Sep 5 06:01:38.758223 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 5 06:01:38.758230 kernel: DMI: Memory slots populated: 1/1 Sep 5 06:01:38.758237 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 06:01:38.758247 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 5 06:01:38.758255 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 5 06:01:38.758263 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 5 06:01:38.758270 kernel: audit: initializing netlink subsys (disabled) Sep 5 06:01:38.758277 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Sep 5 06:01:38.758284 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 06:01:38.758291 kernel: cpuidle: using governor menu Sep 5 06:01:38.758298 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 5 06:01:38.758304 kernel: ASID allocator initialised with 32768 entries Sep 5 06:01:38.758311 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 06:01:38.758318 kernel: Serial: AMBA PL011 UART driver Sep 5 06:01:38.758326 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 06:01:38.758333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 06:01:38.758340 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 5 06:01:38.758347 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 5 06:01:38.758354 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 06:01:38.758361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 06:01:38.758368 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 5 06:01:38.758375 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 5 06:01:38.758382 kernel: ACPI: Added _OSI(Module Device) Sep 5 06:01:38.758390 kernel: ACPI: Added _OSI(Processor Device) Sep 5 06:01:38.758397 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 06:01:38.758404 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 06:01:38.758410 kernel: ACPI: Interpreter enabled Sep 5 06:01:38.758417 kernel: ACPI: Using GIC for interrupt routing Sep 5 06:01:38.758424 kernel: ACPI: MCFG table detected, 1 entries Sep 5 06:01:38.758431 kernel: ACPI: CPU0 has been hot-added Sep 5 06:01:38.758438 kernel: ACPI: CPU1 has been hot-added Sep 5 06:01:38.758445 kernel: ACPI: CPU2 has been hot-added Sep 5 06:01:38.758451 kernel: ACPI: CPU3 has been hot-added Sep 5 06:01:38.758459 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 5 06:01:38.758466 kernel: printk: legacy console [ttyAMA0] enabled Sep 5 06:01:38.758473 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 06:01:38.758655 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig 
ASPM ClockPM Segments MSI HPX-Type3] Sep 5 06:01:38.758731 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 5 06:01:38.758791 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 5 06:01:38.758850 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 5 06:01:38.758912 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 5 06:01:38.758921 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 5 06:01:38.758928 kernel: PCI host bridge to bus 0000:00 Sep 5 06:01:38.758991 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 5 06:01:38.759044 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 5 06:01:38.759094 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 5 06:01:38.759144 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 06:01:38.759220 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 5 06:01:38.759286 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 5 06:01:38.759346 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 5 06:01:38.759403 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 5 06:01:38.759461 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 5 06:01:38.759518 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 5 06:01:38.759588 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 5 06:01:38.759649 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 5 06:01:38.759701 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 5 06:01:38.759764 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 5 06:01:38.759816 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] 
Sep 5 06:01:38.759825 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 5 06:01:38.759832 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 5 06:01:38.759839 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 5 06:01:38.759848 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 5 06:01:38.759855 kernel: iommu: Default domain type: Translated Sep 5 06:01:38.759862 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 5 06:01:38.759869 kernel: efivars: Registered efivars operations Sep 5 06:01:38.759876 kernel: vgaarb: loaded Sep 5 06:01:38.759883 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 5 06:01:38.759890 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 06:01:38.759898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 06:01:38.759905 kernel: pnp: PnP ACPI init Sep 5 06:01:38.759972 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 5 06:01:38.759982 kernel: pnp: PnP ACPI: found 1 devices Sep 5 06:01:38.759989 kernel: NET: Registered PF_INET protocol family Sep 5 06:01:38.759996 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 06:01:38.760003 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 06:01:38.760010 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 06:01:38.760018 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 06:01:38.760024 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 06:01:38.760033 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 06:01:38.760041 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 06:01:38.760047 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 06:01:38.760054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 
06:01:38.760062 kernel: PCI: CLS 0 bytes, default 64 Sep 5 06:01:38.760068 kernel: kvm [1]: HYP mode not available Sep 5 06:01:38.760076 kernel: Initialise system trusted keyrings Sep 5 06:01:38.760083 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 06:01:38.760089 kernel: Key type asymmetric registered Sep 5 06:01:38.760097 kernel: Asymmetric key parser 'x509' registered Sep 5 06:01:38.760104 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 5 06:01:38.760111 kernel: io scheduler mq-deadline registered Sep 5 06:01:38.760118 kernel: io scheduler kyber registered Sep 5 06:01:38.760125 kernel: io scheduler bfq registered Sep 5 06:01:38.760132 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 5 06:01:38.760139 kernel: ACPI: button: Power Button [PWRB] Sep 5 06:01:38.760147 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 5 06:01:38.760204 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 5 06:01:38.760215 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 06:01:38.760222 kernel: thunder_xcv, ver 1.0 Sep 5 06:01:38.760228 kernel: thunder_bgx, ver 1.0 Sep 5 06:01:38.760235 kernel: nicpf, ver 1.0 Sep 5 06:01:38.760242 kernel: nicvf, ver 1.0 Sep 5 06:01:38.760308 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 06:01:38.760362 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T06:01:38 UTC (1757052098) Sep 5 06:01:38.760371 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 06:01:38.760378 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 5 06:01:38.760386 kernel: watchdog: NMI not fully supported Sep 5 06:01:38.760393 kernel: watchdog: Hard watchdog permanently disabled Sep 5 06:01:38.760400 kernel: NET: Registered PF_INET6 protocol family Sep 5 06:01:38.760407 kernel: Segment Routing with IPv6 Sep 5 06:01:38.760413 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 
06:01:38.760420 kernel: NET: Registered PF_PACKET protocol family Sep 5 06:01:38.760427 kernel: Key type dns_resolver registered Sep 5 06:01:38.760434 kernel: registered taskstats version 1 Sep 5 06:01:38.760441 kernel: Loading compiled-in X.509 certificates Sep 5 06:01:38.760449 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: aa317a1d4cc75a128b85a6fc319190bc5853ac85' Sep 5 06:01:38.760456 kernel: Demotion targets for Node 0: null Sep 5 06:01:38.760463 kernel: Key type .fscrypt registered Sep 5 06:01:38.760470 kernel: Key type fscrypt-provisioning registered Sep 5 06:01:38.760477 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 06:01:38.760484 kernel: ima: Allocated hash algorithm: sha1 Sep 5 06:01:38.760491 kernel: ima: No architecture policies found Sep 5 06:01:38.760498 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 06:01:38.760506 kernel: clk: Disabling unused clocks Sep 5 06:01:38.760513 kernel: PM: genpd: Disabling unused power domains Sep 5 06:01:38.760520 kernel: Warning: unable to open an initial console. Sep 5 06:01:38.760527 kernel: Freeing unused kernel memory: 38912K Sep 5 06:01:38.760534 kernel: Run /init as init process Sep 5 06:01:38.760541 kernel: with arguments: Sep 5 06:01:38.760569 kernel: /init Sep 5 06:01:38.760576 kernel: with environment: Sep 5 06:01:38.760583 kernel: HOME=/ Sep 5 06:01:38.760590 kernel: TERM=linux Sep 5 06:01:38.760599 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 06:01:38.760607 systemd[1]: Successfully made /usr/ read-only. Sep 5 06:01:38.760617 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 06:01:38.760625 systemd[1]: Detected virtualization kvm. 
Sep 5 06:01:38.760632 systemd[1]: Detected architecture arm64. Sep 5 06:01:38.760639 systemd[1]: Running in initrd. Sep 5 06:01:38.760646 systemd[1]: No hostname configured, using default hostname. Sep 5 06:01:38.760656 systemd[1]: Hostname set to . Sep 5 06:01:38.760663 systemd[1]: Initializing machine ID from VM UUID. Sep 5 06:01:38.760671 systemd[1]: Queued start job for default target initrd.target. Sep 5 06:01:38.760678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 06:01:38.760686 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 06:01:38.760694 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 06:01:38.760701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 06:01:38.760708 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 06:01:38.760726 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 06:01:38.760734 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 06:01:38.760742 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 06:01:38.760749 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 06:01:38.760757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 06:01:38.760764 systemd[1]: Reached target paths.target - Path Units. Sep 5 06:01:38.760771 systemd[1]: Reached target slices.target - Slice Units. Sep 5 06:01:38.760781 systemd[1]: Reached target swap.target - Swaps. Sep 5 06:01:38.760788 systemd[1]: Reached target timers.target - Timer Units. 
Sep 5 06:01:38.760796 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 06:01:38.760803 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 06:01:38.760811 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 06:01:38.760819 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 5 06:01:38.760826 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 06:01:38.760834 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 06:01:38.760843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 06:01:38.760850 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 06:01:38.760858 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 06:01:38.760866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 06:01:38.760874 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 06:01:38.760881 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 5 06:01:38.760889 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 06:01:38.760896 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 06:01:38.760903 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 06:01:38.760912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 06:01:38.760920 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 06:01:38.760928 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 06:01:38.760935 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 5 06:01:38.760944 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 06:01:38.760970 systemd-journald[245]: Collecting audit messages is disabled. Sep 5 06:01:38.760989 systemd-journald[245]: Journal started Sep 5 06:01:38.761008 systemd-journald[245]: Runtime Journal (/run/log/journal/9017167902ee456d9798f205f1ff4b5f) is 6M, max 48.5M, 42.4M free. Sep 5 06:01:38.756189 systemd-modules-load[246]: Inserted module 'overlay' Sep 5 06:01:38.769280 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 06:01:38.769303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 06:01:38.770638 systemd-modules-load[246]: Inserted module 'br_netfilter' Sep 5 06:01:38.772088 kernel: Bridge firewalling registered Sep 5 06:01:38.772105 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 06:01:38.773150 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 06:01:38.775069 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 06:01:38.778023 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 06:01:38.779535 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 06:01:38.782704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 06:01:38.787040 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 06:01:38.795958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 06:01:38.796223 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 5 06:01:38.797216 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 5 06:01:38.799054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 06:01:38.801654 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 06:01:38.805014 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 06:01:38.812037 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 06:01:38.826096 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ad0560d5d82b42c8405832aa39f4f52a20b919c503afe4e7ecc72adb2e451fae Sep 5 06:01:38.840381 systemd-resolved[286]: Positive Trust Anchors: Sep 5 06:01:38.840399 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 06:01:38.840434 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 06:01:38.847230 systemd-resolved[286]: Defaulting to hostname 'linux'. Sep 5 06:01:38.848171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 06:01:38.850612 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 5 06:01:38.904578 kernel: SCSI subsystem initialized
Sep 5 06:01:38.909562 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 06:01:38.917578 kernel: iscsi: registered transport (tcp)
Sep 5 06:01:38.930591 kernel: iscsi: registered transport (qla4xxx)
Sep 5 06:01:38.930633 kernel: QLogic iSCSI HBA Driver
Sep 5 06:01:38.948110 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 06:01:38.969884 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:01:38.972284 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 06:01:39.014001 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 06:01:39.016259 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 06:01:39.075576 kernel: raid6: neonx8 gen() 15609 MB/s
Sep 5 06:01:39.092559 kernel: raid6: neonx4 gen() 15647 MB/s
Sep 5 06:01:39.109563 kernel: raid6: neonx2 gen() 12861 MB/s
Sep 5 06:01:39.126554 kernel: raid6: neonx1 gen() 10397 MB/s
Sep 5 06:01:39.143561 kernel: raid6: int64x8 gen() 6851 MB/s
Sep 5 06:01:39.160555 kernel: raid6: int64x4 gen() 7311 MB/s
Sep 5 06:01:39.177554 kernel: raid6: int64x2 gen() 6064 MB/s
Sep 5 06:01:39.194556 kernel: raid6: int64x1 gen() 5031 MB/s
Sep 5 06:01:39.194575 kernel: raid6: using algorithm neonx4 gen() 15647 MB/s
Sep 5 06:01:39.211584 kernel: raid6: .... xor() 12284 MB/s, rmw enabled
Sep 5 06:01:39.211617 kernel: raid6: using neon recovery algorithm
Sep 5 06:01:39.216639 kernel: xor: measuring software checksum speed
Sep 5 06:01:39.216660 kernel: 8regs : 21036 MB/sec
Sep 5 06:01:39.217719 kernel: 32regs : 20844 MB/sec
Sep 5 06:01:39.217742 kernel: arm64_neon : 27231 MB/sec
Sep 5 06:01:39.217760 kernel: xor: using function: arm64_neon (27231 MB/sec)
Sep 5 06:01:39.269574 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 06:01:39.277580 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 06:01:39.279797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:01:39.316465 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 5 06:01:39.321283 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:01:39.323037 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 06:01:39.345858 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 5 06:01:39.368689 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 06:01:39.370724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 06:01:39.418094 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:01:39.420774 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 06:01:39.469561 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 5 06:01:39.474863 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 06:01:39.481687 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 06:01:39.481767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:01:39.488188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:01:39.493261 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 06:01:39.493280 kernel: GPT:9289727 != 19775487
Sep 5 06:01:39.493289 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 06:01:39.493298 kernel: GPT:9289727 != 19775487
Sep 5 06:01:39.493306 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 06:01:39.493315 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:01:39.491129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:01:39.516979 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 06:01:39.528205 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 06:01:39.530053 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 06:01:39.531059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:01:39.542570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 06:01:39.543466 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 06:01:39.551516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 06:01:39.552433 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 06:01:39.554163 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:01:39.555741 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 06:01:39.558088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 06:01:39.559582 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 06:01:39.578095 disk-uuid[589]: Primary Header is updated.
Sep 5 06:01:39.578095 disk-uuid[589]: Secondary Entries is updated.
Sep 5 06:01:39.578095 disk-uuid[589]: Secondary Header is updated.
Sep 5 06:01:39.581573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:01:39.582579 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 06:01:40.592512 disk-uuid[593]: The operation has completed successfully.
Sep 5 06:01:40.593491 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:01:40.624440 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 06:01:40.624566 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 06:01:40.641989 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 06:01:40.659243 sh[609]: Success
Sep 5 06:01:40.672141 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 06:01:40.672175 kernel: device-mapper: uevent: version 1.0.3
Sep 5 06:01:40.672192 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 5 06:01:40.679572 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 5 06:01:40.705480 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 06:01:40.708097 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 06:01:40.724773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 06:01:40.730559 kernel: BTRFS: device fsid 9394a7fb-1948-4797-93d7-fc7ecccd6bdf devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (621)
Sep 5 06:01:40.732551 kernel: BTRFS info (device dm-0): first mount of filesystem 9394a7fb-1948-4797-93d7-fc7ecccd6bdf
Sep 5 06:01:40.732567 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 06:01:40.736565 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 06:01:40.736585 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 5 06:01:40.737098 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 06:01:40.738164 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 06:01:40.739337 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 06:01:40.740106 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 06:01:40.743024 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 06:01:40.770282 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (652)
Sep 5 06:01:40.770323 kernel: BTRFS info (device vda6): first mount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:01:40.770333 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 06:01:40.773627 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:01:40.773664 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:01:40.777557 kernel: BTRFS info (device vda6): last unmount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:01:40.779617 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 06:01:40.781383 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 06:01:40.842038 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 06:01:40.845333 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 06:01:40.886476 systemd-networkd[797]: lo: Link UP
Sep 5 06:01:40.887157 systemd-networkd[797]: lo: Gained carrier
Sep 5 06:01:40.887294 ignition[695]: Ignition 2.22.0
Sep 5 06:01:40.888244 systemd-networkd[797]: Enumeration completed
Sep 5 06:01:40.887301 ignition[695]: Stage: fetch-offline
Sep 5 06:01:40.888346 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 06:01:40.887331 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:01:40.888711 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:01:40.887339 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:01:40.888716 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 06:01:40.887419 ignition[695]: parsed url from cmdline: ""
Sep 5 06:01:40.889582 systemd[1]: Reached target network.target - Network.
Sep 5 06:01:40.887422 ignition[695]: no config URL provided
Sep 5 06:01:40.890230 systemd-networkd[797]: eth0: Link UP
Sep 5 06:01:40.887426 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 06:01:40.890638 systemd-networkd[797]: eth0: Gained carrier
Sep 5 06:01:40.887432 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Sep 5 06:01:40.890647 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:01:40.887448 ignition[695]: op(1): [started] loading QEMU firmware config module
Sep 5 06:01:40.887452 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 06:01:40.894335 ignition[695]: op(1): [finished] loading QEMU firmware config module
Sep 5 06:01:40.916588 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 06:01:40.943626 ignition[695]: parsing config with SHA512: 45cd484be37ed1dcf98780f5100d1e089f2d106f24f498813f2d0e7a18f914e2dc0e0f36e24ca2d46680bb60c8bc7c16131c60c3f95162c4586239488672943c
Sep 5 06:01:40.948805 unknown[695]: fetched base config from "system"
Sep 5 06:01:40.948816 unknown[695]: fetched user config from "qemu"
Sep 5 06:01:40.949182 ignition[695]: fetch-offline: fetch-offline passed
Sep 5 06:01:40.949232 ignition[695]: Ignition finished successfully
Sep 5 06:01:40.951505 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 06:01:40.952875 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 06:01:40.953587 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 06:01:40.990118 ignition[812]: Ignition 2.22.0
Sep 5 06:01:40.990133 ignition[812]: Stage: kargs
Sep 5 06:01:40.990258 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:01:40.990266 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:01:40.990972 ignition[812]: kargs: kargs passed
Sep 5 06:01:40.991010 ignition[812]: Ignition finished successfully
Sep 5 06:01:40.993221 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 06:01:40.995347 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 06:01:41.025532 ignition[820]: Ignition 2.22.0
Sep 5 06:01:41.025565 ignition[820]: Stage: disks
Sep 5 06:01:41.025692 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:01:41.025700 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:01:41.029284 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 06:01:41.026440 ignition[820]: disks: disks passed
Sep 5 06:01:41.030241 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 06:01:41.026482 ignition[820]: Ignition finished successfully
Sep 5 06:01:41.031502 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 06:01:41.032746 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 06:01:41.034169 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 06:01:41.035314 systemd[1]: Reached target basic.target - Basic System.
Sep 5 06:01:41.037527 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 06:01:41.061384 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 5 06:01:41.066498 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 06:01:41.068504 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 06:01:41.129523 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 06:01:41.130692 kernel: EXT4-fs (vda9): mounted filesystem f4f5d9cb-0abd-4bb7-89fa-b5d1beb281ac r/w with ordered data mode. Quota mode: none.
Sep 5 06:01:41.130530 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 06:01:41.132476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 06:01:41.133971 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 06:01:41.134747 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 06:01:41.134787 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 06:01:41.134807 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 06:01:41.150958 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 06:01:41.153111 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 06:01:41.157398 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838)
Sep 5 06:01:41.157418 kernel: BTRFS info (device vda6): first mount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:01:41.157433 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 06:01:41.158564 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:01:41.158583 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:01:41.159461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 06:01:41.185489 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 06:01:41.189133 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
Sep 5 06:01:41.193114 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 06:01:41.196654 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 06:01:41.260315 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 06:01:41.262388 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 06:01:41.263855 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 06:01:41.277564 kernel: BTRFS info (device vda6): last unmount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:01:41.287104 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 06:01:41.306789 ignition[954]: INFO : Ignition 2.22.0
Sep 5 06:01:41.306789 ignition[954]: INFO : Stage: mount
Sep 5 06:01:41.308061 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:01:41.308061 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:01:41.308061 ignition[954]: INFO : mount: mount passed
Sep 5 06:01:41.308061 ignition[954]: INFO : Ignition finished successfully
Sep 5 06:01:41.310840 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 06:01:41.313213 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 06:01:41.864195 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 06:01:41.865631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 06:01:41.881424 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (967)
Sep 5 06:01:41.881453 kernel: BTRFS info (device vda6): first mount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:01:41.881464 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 06:01:41.884557 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:01:41.884577 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:01:41.885445 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 06:01:41.915518 ignition[984]: INFO : Ignition 2.22.0
Sep 5 06:01:41.915518 ignition[984]: INFO : Stage: files
Sep 5 06:01:41.916765 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:01:41.916765 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:01:41.916765 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 06:01:41.919501 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 06:01:41.919501 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 06:01:41.919501 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 06:01:41.919501 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 06:01:41.919501 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 06:01:41.919163 unknown[984]: wrote ssh authorized keys file for user: core
Sep 5 06:01:41.925348 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 06:01:41.925348 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 5 06:01:41.971227 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 06:01:42.052643 systemd-networkd[797]: eth0: Gained IPv6LL
Sep 5 06:01:42.319214 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 06:01:42.320782 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 06:01:42.320782 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 06:01:42.320782 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 06:01:42.320782 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 06:01:42.320782 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 06:01:42.320782 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 06:01:42.320782 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 06:01:42.320782 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 06:01:42.332595 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 06:01:42.332595 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 06:01:42.332595 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 06:01:42.337765 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 06:01:42.337765 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 06:01:42.337765 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 5 06:01:42.686682 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 06:01:43.024889 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 06:01:43.024889 ignition[984]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 06:01:43.028193 ignition[984]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 06:01:43.028193 ignition[984]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 06:01:43.028193 ignition[984]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 06:01:43.028193 ignition[984]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 06:01:43.028193 ignition[984]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 06:01:43.028193 ignition[984]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 06:01:43.028193 ignition[984]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 06:01:43.028193 ignition[984]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 06:01:43.043135 ignition[984]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 06:01:43.046424 ignition[984]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 06:01:43.047625 ignition[984]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 06:01:43.047625 ignition[984]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 06:01:43.047625 ignition[984]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 06:01:43.047625 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 06:01:43.047625 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 06:01:43.047625 ignition[984]: INFO : files: files passed
Sep 5 06:01:43.047625 ignition[984]: INFO : Ignition finished successfully
Sep 5 06:01:43.048728 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 06:01:43.050528 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 06:01:43.052244 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 06:01:43.063516 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 06:01:43.063618 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 06:01:43.066290 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 06:01:43.068385 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:01:43.068385 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:01:43.071044 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:01:43.071737 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 06:01:43.073494 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 06:01:43.076857 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 06:01:43.128635 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 06:01:43.128779 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 06:01:43.130492 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 06:01:43.131922 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 06:01:43.133353 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 06:01:43.134155 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 06:01:43.147566 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 06:01:43.149633 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 06:01:43.168495 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 06:01:43.169468 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:01:43.171123 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 06:01:43.172597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 06:01:43.172729 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 06:01:43.174893 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 06:01:43.176531 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 06:01:43.177892 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 06:01:43.179208 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 06:01:43.180660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 06:01:43.182183 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 06:01:43.183637 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 06:01:43.185242 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 06:01:43.186814 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 06:01:43.188395 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 06:01:43.189866 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 06:01:43.191108 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 06:01:43.191230 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 06:01:43.193140 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 06:01:43.194602 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 06:01:43.196156 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 06:01:43.199600 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:01:43.200539 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 06:01:43.200673 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 06:01:43.202959 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 06:01:43.203070 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 06:01:43.204514 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 06:01:43.205794 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 06:01:43.209611 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 06:01:43.210591 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 06:01:43.212310 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 06:01:43.213524 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 06:01:43.213623 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 06:01:43.214844 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 06:01:43.214918 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 06:01:43.216104 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 06:01:43.216208 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 06:01:43.217505 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 06:01:43.217616 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 06:01:43.219565 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 06:01:43.220936 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 06:01:43.221058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:01:43.245941 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 06:01:43.246618 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 06:01:43.246768 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:01:43.248215 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 06:01:43.248304 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 06:01:43.253425 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 06:01:43.253501 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 06:01:43.262218 ignition[1040]: INFO : Ignition 2.22.0
Sep 5 06:01:43.262218 ignition[1040]: INFO : Stage: umount
Sep 5 06:01:43.263830 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:01:43.263830 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:01:43.263830 ignition[1040]: INFO : umount: umount passed
Sep 5 06:01:43.263830 ignition[1040]: INFO : Ignition finished successfully
Sep 5 06:01:43.263110 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 06:01:43.264897 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 06:01:43.264986 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 06:01:43.266437 systemd[1]: Stopped target network.target - Network.
Sep 5 06:01:43.267379 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 06:01:43.267430 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 06:01:43.268763 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 06:01:43.268803 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 06:01:43.270203 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 06:01:43.270247 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 06:01:43.271577 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 06:01:43.271614 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 06:01:43.273869 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 06:01:43.275139 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 06:01:43.283251 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 06:01:43.283363 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 06:01:43.286656 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 06:01:43.286904 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 06:01:43.286940 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 06:01:43.290037 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 06:01:43.292281 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 06:01:43.292407 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 06:01:43.295319 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 5 06:01:43.295473 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 06:01:43.297150 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 06:01:43.297183 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 06:01:43.299562 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 06:01:43.301255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 06:01:43.301313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 06:01:43.302909 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 06:01:43.302950 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 06:01:43.305338 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 06:01:43.305381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 06:01:43.306913 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:01:43.311084 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 06:01:43.322818 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 06:01:43.324669 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 06:01:43.325662 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 06:01:43.325718 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 06:01:43.327221 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 06:01:43.327310 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 06:01:43.329053 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 06:01:43.329181 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:01:43.330728 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 06:01:43.330764 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 06:01:43.332002 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 06:01:43.332034 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 06:01:43.333531 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 06:01:43.333596 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 06:01:43.335871 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 06:01:43.335917 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 06:01:43.338142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 06:01:43.338192 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 06:01:43.341162 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 06:01:43.342479 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 06:01:43.342528 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:01:43.345539 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 06:01:43.345593 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 06:01:43.348371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 06:01:43.348412 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:01:43.365776 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 06:01:43.366637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 06:01:43.367743 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 06:01:43.369212 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 06:01:43.394357 systemd[1]: Switching root.
Sep 5 06:01:43.421475 systemd-journald[245]: Journal stopped
Sep 5 06:01:44.155458 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 5 06:01:44.155503 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 06:01:44.155520 kernel: SELinux: policy capability open_perms=1
Sep 5 06:01:44.155529 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 06:01:44.155538 kernel: SELinux: policy capability always_check_network=0
Sep 5 06:01:44.155568 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 06:01:44.155580 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 06:01:44.155592 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 06:01:44.155601 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 06:01:44.155610 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 06:01:44.155619 kernel: audit: type=1403 audit(1757052103.591:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 06:01:44.155630 systemd[1]: Successfully loaded SELinux policy in 58.380ms.
Sep 5 06:01:44.155650 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.057ms.
Sep 5 06:01:44.155661 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 06:01:44.155676 systemd[1]: Detected virtualization kvm.
Sep 5 06:01:44.155687 systemd[1]: Detected architecture arm64.
Sep 5 06:01:44.155706 systemd[1]: Detected first boot.
Sep 5 06:01:44.155716 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 06:01:44.155726 zram_generator::config[1084]: No configuration found.
Sep 5 06:01:44.155737 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 06:01:44.155747 systemd[1]: Populated /etc with preset unit settings.
Sep 5 06:01:44.155758 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 06:01:44.155768 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 06:01:44.155780 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 06:01:44.155789 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 06:01:44.155799 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 06:01:44.155809 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 06:01:44.155819 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 06:01:44.155829 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 06:01:44.155839 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 06:01:44.155848 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 06:01:44.155858 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 06:01:44.155869 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 06:01:44.155879 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:01:44.155889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 06:01:44.155901 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 06:01:44.155911 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 06:01:44.155921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 06:01:44.155931 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 06:01:44.155942 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 5 06:01:44.155952 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 06:01:44.155963 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 06:01:44.155973 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 06:01:44.155983 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 06:01:44.155993 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 06:01:44.156006 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 06:01:44.156016 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:01:44.156027 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 06:01:44.156037 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 06:01:44.156049 systemd[1]: Reached target swap.target - Swaps.
Sep 5 06:01:44.156059 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 06:01:44.156069 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 06:01:44.156079 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 06:01:44.156089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 06:01:44.156099 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 06:01:44.156109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 06:01:44.156119 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 06:01:44.156129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 06:01:44.156144 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 06:01:44.156155 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 06:01:44.156165 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 06:01:44.156174 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 06:01:44.156184 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 06:01:44.156194 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 06:01:44.156205 systemd[1]: Reached target machines.target - Containers.
Sep 5 06:01:44.156218 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 06:01:44.156229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 06:01:44.156239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 06:01:44.156249 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 06:01:44.156259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 06:01:44.156269 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 06:01:44.156279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 06:01:44.156289 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 06:01:44.156298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 06:01:44.156309 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 06:01:44.156321 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 06:01:44.156330 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 06:01:44.156340 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 06:01:44.156349 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 06:01:44.156360 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 06:01:44.156369 kernel: fuse: init (API version 7.41)
Sep 5 06:01:44.156379 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 06:01:44.156388 kernel: loop: module loaded
Sep 5 06:01:44.156399 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 06:01:44.156410 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 06:01:44.156420 kernel: ACPI: bus type drm_connector registered
Sep 5 06:01:44.156430 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 06:01:44.156440 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 06:01:44.156450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 06:01:44.156462 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 06:01:44.156473 systemd[1]: Stopped verity-setup.service.
Sep 5 06:01:44.156483 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 06:01:44.156494 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 06:01:44.156503 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 06:01:44.156513 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 06:01:44.156524 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 06:01:44.156534 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 06:01:44.156553 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:01:44.156585 systemd-journald[1149]: Collecting audit messages is disabled.
Sep 5 06:01:44.156608 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 06:01:44.156620 systemd-journald[1149]: Journal started
Sep 5 06:01:44.156644 systemd-journald[1149]: Runtime Journal (/run/log/journal/9017167902ee456d9798f205f1ff4b5f) is 6M, max 48.5M, 42.4M free.
Sep 5 06:01:43.944226 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 06:01:43.973344 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 06:01:43.973756 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 06:01:44.158254 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 06:01:44.159887 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 06:01:44.160087 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 06:01:44.161363 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 06:01:44.161540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 06:01:44.162796 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 06:01:44.162954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 06:01:44.165865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 06:01:44.166025 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 06:01:44.167456 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 06:01:44.167663 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 06:01:44.168823 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 06:01:44.168972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 06:01:44.170191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 06:01:44.171489 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:01:44.173109 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 06:01:44.174483 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 5 06:01:44.186639 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 06:01:44.188766 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 06:01:44.190566 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 06:01:44.191501 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 06:01:44.191536 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 06:01:44.193234 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 5 06:01:44.197283 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 06:01:44.198360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 06:01:44.199668 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 06:01:44.201640 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 06:01:44.202861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 06:01:44.204939 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 06:01:44.206146 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 06:01:44.207195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 06:01:44.209375 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 06:01:44.213084 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 06:01:44.216294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:01:44.218281 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 06:01:44.224842 systemd-journald[1149]: Time spent on flushing to /var/log/journal/9017167902ee456d9798f205f1ff4b5f is 22.085ms for 887 entries.
Sep 5 06:01:44.224842 systemd-journald[1149]: System Journal (/var/log/journal/9017167902ee456d9798f205f1ff4b5f) is 8M, max 195.6M, 187.6M free.
Sep 5 06:01:44.269064 systemd-journald[1149]: Received client request to flush runtime journal.
Sep 5 06:01:44.269125 kernel: loop0: detected capacity change from 0 to 100608
Sep 5 06:01:44.269143 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 06:01:44.221394 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 06:01:44.228399 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 06:01:44.235827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 06:01:44.239002 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 5 06:01:44.241599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 06:01:44.271935 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 06:01:44.280523 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 5 06:01:44.282271 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 06:01:44.284732 kernel: loop1: detected capacity change from 0 to 203944
Sep 5 06:01:44.286505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 06:01:44.311034 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Sep 5 06:01:44.311051 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Sep 5 06:01:44.314743 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 06:01:44.323577 kernel: loop2: detected capacity change from 0 to 119320
Sep 5 06:01:44.354595 kernel: loop3: detected capacity change from 0 to 100608
Sep 5 06:01:44.360588 kernel: loop4: detected capacity change from 0 to 203944
Sep 5 06:01:44.365590 kernel: loop5: detected capacity change from 0 to 119320
Sep 5 06:01:44.373376 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 06:01:44.374096 (sd-merge)[1223]: Merged extensions into '/usr'.
Sep 5 06:01:44.377839 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 06:01:44.377858 systemd[1]: Reloading...
Sep 5 06:01:44.442621 zram_generator::config[1245]: No configuration found.
Sep 5 06:01:44.514110 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 06:01:44.583602 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 06:01:44.584071 systemd[1]: Reloading finished in 205 ms.
Sep 5 06:01:44.616199 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 06:01:44.618157 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 06:01:44.628678 systemd[1]: Starting ensure-sysext.service...
Sep 5 06:01:44.630266 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 06:01:44.638842 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Sep 5 06:01:44.638857 systemd[1]: Reloading...
Sep 5 06:01:44.645120 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 5 06:01:44.645153 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 5 06:01:44.645388 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 06:01:44.645595 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 06:01:44.646208 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 06:01:44.646410 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 5 06:01:44.646457 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 5 06:01:44.648738 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 06:01:44.648750 systemd-tmpfiles[1285]: Skipping /boot
Sep 5 06:01:44.654509 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 06:01:44.654528 systemd-tmpfiles[1285]: Skipping /boot
Sep 5 06:01:44.690577 zram_generator::config[1312]: No configuration found.
Sep 5 06:01:44.821684 systemd[1]: Reloading finished in 182 ms.
Sep 5 06:01:44.842049 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 06:01:44.847389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 06:01:44.860540 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 06:01:44.862528 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 06:01:44.869848 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 06:01:44.872482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 06:01:44.877516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:01:44.879860 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 06:01:44.885616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 06:01:44.892622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 06:01:44.895904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 06:01:44.898488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 06:01:44.900807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 06:01:44.900920 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 06:01:44.901949 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 06:01:44.903467 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 06:01:44.905088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 06:01:44.905233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 06:01:44.906895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 06:01:44.907034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 06:01:44.909705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 06:01:44.909869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 06:01:44.918192 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 06:01:44.918713 augenrules[1382]: No rules
Sep 5 06:01:44.919312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 06:01:44.921142 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
Sep 5 06:01:44.921262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 06:01:44.925778 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 06:01:44.926779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 06:01:44.926889 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 06:01:44.934655 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 06:01:44.936821 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 06:01:44.937752 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 06:01:44.938955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:01:44.941956 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 06:01:44.942181 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 06:01:44.944589 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 06:01:44.946930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 06:01:44.947072 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 06:01:44.948702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 06:01:44.949166 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 06:01:44.951869 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 06:01:44.952018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 06:01:44.953384 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 06:01:44.969107 systemd[1]: Finished ensure-sysext.service.
Sep 5 06:01:44.975765 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 06:01:44.977131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 06:01:44.978720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 06:01:44.982608 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 06:01:44.986106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 06:01:45.000609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 06:01:45.001778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 06:01:45.001825 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 06:01:45.005806 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 06:01:45.009704 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 06:01:45.010647 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 06:01:45.030766 augenrules[1427]: /sbin/augenrules: No change
Sep 5 06:01:45.031524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 06:01:45.031758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 06:01:45.033026 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 06:01:45.034633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 06:01:45.035924 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 06:01:45.037936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 06:01:45.038127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 06:01:45.039433 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 06:01:45.039624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 06:01:45.040379 augenrules[1459]: No rules
Sep 5 06:01:45.041317 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 06:01:45.041487 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 06:01:45.061398 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 5 06:01:45.062509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 06:01:45.062585 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 06:01:45.074576 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 06:01:45.076706 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 06:01:45.099531 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 06:01:45.123934 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 06:01:45.125122 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 06:01:45.132620 systemd-networkd[1436]: lo: Link UP
Sep 5 06:01:45.132631 systemd-networkd[1436]: lo: Gained carrier
Sep 5 06:01:45.133410 systemd-networkd[1436]: Enumeration completed
Sep 5 06:01:45.133503 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 06:01:45.134960 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:01:45.134968 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 06:01:45.136613 systemd-networkd[1436]: eth0: Link UP Sep 5 06:01:45.136764 systemd-networkd[1436]: eth0: Gained carrier Sep 5 06:01:45.136785 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 06:01:45.139817 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 5 06:01:45.143397 systemd-resolved[1352]: Positive Trust Anchors: Sep 5 06:01:45.143425 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 06:01:45.143458 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 06:01:45.144005 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 06:01:45.150084 systemd-resolved[1352]: Defaulting to hostname 'linux'. Sep 5 06:01:45.151630 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 06:01:45.152703 systemd[1]: Reached target network.target - Network. Sep 5 06:01:45.153632 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 06:01:45.153758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 5 06:01:45.154694 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 06:01:45.155532 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 06:01:45.156002 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Sep 5 06:01:44.738755 systemd-journald[1149]: Time jumped backwards, rotating. Sep 5 06:01:44.720350 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 06:01:44.720404 systemd-timesyncd[1437]: Initial clock synchronization to Fri 2025-09-05 06:01:44.720253 UTC. Sep 5 06:01:44.721327 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 06:01:44.722242 systemd-resolved[1352]: Clock change detected. Flushing caches. Sep 5 06:01:44.723661 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 06:01:44.724968 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 06:01:44.726174 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 06:01:44.727130 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 06:01:44.727165 systemd[1]: Reached target paths.target - Path Units. Sep 5 06:01:44.727937 systemd[1]: Reached target timers.target - Timer Units. Sep 5 06:01:44.730150 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 06:01:44.733568 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 06:01:44.736159 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 06:01:44.739489 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Sep 5 06:01:44.740533 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 5 06:01:44.743955 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 06:01:44.745818 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 06:01:44.752182 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 06:01:44.753410 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 06:01:44.754873 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 06:01:44.755933 systemd[1]: Reached target basic.target - Basic System. Sep 5 06:01:44.757451 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 06:01:44.757478 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 06:01:44.759366 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 06:01:44.764454 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 06:01:44.768517 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 06:01:44.774027 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 06:01:44.777599 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 06:01:44.778369 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 06:01:44.786998 jq[1503]: false Sep 5 06:01:44.787398 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 06:01:44.790651 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 5 06:01:44.792384 extend-filesystems[1505]: Found /dev/vda6 Sep 5 06:01:44.792764 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 06:01:44.795609 extend-filesystems[1505]: Found /dev/vda9 Sep 5 06:01:44.797424 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 06:01:44.798443 extend-filesystems[1505]: Checking size of /dev/vda9 Sep 5 06:01:44.802374 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 06:01:44.804255 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 06:01:44.804776 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 06:01:44.805352 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 06:01:44.808316 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 06:01:44.815233 extend-filesystems[1505]: Resized partition /dev/vda9 Sep 5 06:01:44.812646 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 06:01:44.815602 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 06:01:44.815782 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 06:01:44.816652 extend-filesystems[1530]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 06:01:44.817669 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 06:01:44.817876 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 06:01:44.820286 jq[1524]: true Sep 5 06:01:44.825087 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 5 06:01:44.826250 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 06:01:44.830358 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 06:01:44.839088 update_engine[1520]: I20250905 06:01:44.836956 1520 main.cc:92] Flatcar Update Engine starting Sep 5 06:01:44.848438 jq[1533]: true Sep 5 06:01:44.844672 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 06:01:44.849535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 06:01:44.859511 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 06:01:44.864458 tar[1532]: linux-arm64/helm Sep 5 06:01:44.871572 extend-filesystems[1530]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 06:01:44.871572 extend-filesystems[1530]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 06:01:44.871572 extend-filesystems[1530]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 06:01:44.881788 extend-filesystems[1505]: Resized filesystem in /dev/vda9 Sep 5 06:01:44.873089 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 06:01:44.882459 dbus-daemon[1501]: [system] SELinux support is enabled Sep 5 06:01:44.874355 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 06:01:44.882621 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 06:01:44.887999 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 06:01:44.888034 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 5 06:01:44.889893 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 06:01:44.889919 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 06:01:44.891223 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 06:01:44.891492 update_engine[1520]: I20250905 06:01:44.891398 1520 update_check_scheduler.cc:74] Next update check in 3m53s Sep 5 06:01:44.892718 systemd-logind[1516]: New seat seat0. Sep 5 06:01:44.910730 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 06:01:44.914243 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Sep 5 06:01:44.920679 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 06:01:44.921261 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 06:01:44.925461 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 06:01:44.932947 systemd[1]: Started update-engine.service - Update Engine. Sep 5 06:01:44.936060 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 06:01:44.938906 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 5 06:01:45.021117 containerd[1540]: time="2025-09-05T06:01:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 5 06:01:45.021903 containerd[1540]: time="2025-09-05T06:01:45.021870804Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 5 06:01:45.027572 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 06:01:45.033217 containerd[1540]: time="2025-09-05T06:01:45.032972404Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.92µs" Sep 5 06:01:45.033217 containerd[1540]: time="2025-09-05T06:01:45.033012524Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 5 06:01:45.033217 containerd[1540]: time="2025-09-05T06:01:45.033031084Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 5 06:01:45.033217 containerd[1540]: time="2025-09-05T06:01:45.033178764Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 5 06:01:45.033365 containerd[1540]: time="2025-09-05T06:01:45.033344604Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 5 06:01:45.033435 containerd[1540]: time="2025-09-05T06:01:45.033422124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 06:01:45.033545 containerd[1540]: time="2025-09-05T06:01:45.033523364Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 06:01:45.033594 containerd[1540]: time="2025-09-05T06:01:45.033581484Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 06:01:45.033880 containerd[1540]: time="2025-09-05T06:01:45.033855004Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 06:01:45.033951 containerd[1540]: time="2025-09-05T06:01:45.033936964Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 06:01:45.033997 containerd[1540]: time="2025-09-05T06:01:45.033985364Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 06:01:45.034037 containerd[1540]: time="2025-09-05T06:01:45.034026044Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 5 06:01:45.034161 containerd[1540]: time="2025-09-05T06:01:45.034143564Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 5 06:01:45.034542 containerd[1540]: time="2025-09-05T06:01:45.034493084Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 06:01:45.034572 containerd[1540]: time="2025-09-05T06:01:45.034543284Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 06:01:45.034572 containerd[1540]: time="2025-09-05T06:01:45.034554804Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 5 06:01:45.034603 containerd[1540]: time="2025-09-05T06:01:45.034589404Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 5 06:01:45.034813 containerd[1540]: time="2025-09-05T06:01:45.034797124Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 5 06:01:45.034877 containerd[1540]: time="2025-09-05T06:01:45.034862164Z" level=info msg="metadata content store policy set" policy=shared Sep 5 06:01:45.038414 containerd[1540]: time="2025-09-05T06:01:45.038323444Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 5 06:01:45.038502 containerd[1540]: time="2025-09-05T06:01:45.038485444Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 5 06:01:45.038583 containerd[1540]: time="2025-09-05T06:01:45.038508364Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 5 06:01:45.038583 containerd[1540]: time="2025-09-05T06:01:45.038572484Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 5 06:01:45.038641 containerd[1540]: time="2025-09-05T06:01:45.038594364Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 5 06:01:45.038641 containerd[1540]: time="2025-09-05T06:01:45.038609244Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 5 06:01:45.038641 containerd[1540]: time="2025-09-05T06:01:45.038621364Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 5 06:01:45.038641 containerd[1540]: time="2025-09-05T06:01:45.038633764Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 5 06:01:45.038698 containerd[1540]: time="2025-09-05T06:01:45.038647204Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Sep 5 06:01:45.038698 containerd[1540]: time="2025-09-05T06:01:45.038658004Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 5 06:01:45.038698 containerd[1540]: time="2025-09-05T06:01:45.038667484Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 5 06:01:45.038742 containerd[1540]: time="2025-09-05T06:01:45.038721004Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 5 06:01:45.038964 containerd[1540]: time="2025-09-05T06:01:45.038938524Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 5 06:01:45.039033 containerd[1540]: time="2025-09-05T06:01:45.039016964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 5 06:01:45.039056 containerd[1540]: time="2025-09-05T06:01:45.039042644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 5 06:01:45.039073 containerd[1540]: time="2025-09-05T06:01:45.039056884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 5 06:01:45.039119 containerd[1540]: time="2025-09-05T06:01:45.039067844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 5 06:01:45.039137 containerd[1540]: time="2025-09-05T06:01:45.039123524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 5 06:01:45.039153 containerd[1540]: time="2025-09-05T06:01:45.039137524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 5 06:01:45.039153 containerd[1540]: time="2025-09-05T06:01:45.039148804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 5 06:01:45.039189 containerd[1540]: 
time="2025-09-05T06:01:45.039160644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 5 06:01:45.039189 containerd[1540]: time="2025-09-05T06:01:45.039171604Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 5 06:01:45.039265 containerd[1540]: time="2025-09-05T06:01:45.039245924Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 5 06:01:45.039570 containerd[1540]: time="2025-09-05T06:01:45.039537884Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 5 06:01:45.039601 containerd[1540]: time="2025-09-05T06:01:45.039574084Z" level=info msg="Start snapshots syncer" Sep 5 06:01:45.039619 containerd[1540]: time="2025-09-05T06:01:45.039601804Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 5 06:01:45.040542 containerd[1540]: time="2025-09-05T06:01:45.040477404Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 5 06:01:45.040681 containerd[1540]: time="2025-09-05T06:01:45.040559804Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 5 06:01:45.040681 containerd[1540]: time="2025-09-05T06:01:45.040658004Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 5 06:01:45.040849 containerd[1540]: time="2025-09-05T06:01:45.040781164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 5 06:01:45.040849 containerd[1540]: time="2025-09-05T06:01:45.040822884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 5 06:01:45.040849 containerd[1540]: time="2025-09-05T06:01:45.040841084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 5 06:01:45.040906 containerd[1540]: time="2025-09-05T06:01:45.040859244Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 5 06:01:45.040906 containerd[1540]: time="2025-09-05T06:01:45.040874884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 5 06:01:45.040906 containerd[1540]: time="2025-09-05T06:01:45.040887164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 5 06:01:45.040906 containerd[1540]: time="2025-09-05T06:01:45.040900484Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 5 06:01:45.040972 containerd[1540]: time="2025-09-05T06:01:45.040931124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 5 06:01:45.040972 containerd[1540]: time="2025-09-05T06:01:45.040947444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 5 06:01:45.040972 containerd[1540]: time="2025-09-05T06:01:45.040962124Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 5 06:01:45.041060 containerd[1540]: time="2025-09-05T06:01:45.041006484Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 06:01:45.041060 containerd[1540]: time="2025-09-05T06:01:45.041023524Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 06:01:45.041060 containerd[1540]: time="2025-09-05T06:01:45.041032844Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 06:01:45.041060 containerd[1540]: time="2025-09-05T06:01:45.041046644Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 06:01:45.041060 containerd[1540]: time="2025-09-05T06:01:45.041057484Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 5 06:01:45.041136 containerd[1540]: time="2025-09-05T06:01:45.041068324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 5 06:01:45.041136 containerd[1540]: time="2025-09-05T06:01:45.041086324Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 5 06:01:45.041170 containerd[1540]: time="2025-09-05T06:01:45.041165804Z" level=info msg="runtime interface created" Sep 5 06:01:45.041187 containerd[1540]: time="2025-09-05T06:01:45.041171444Z" level=info msg="created NRI interface" Sep 5 06:01:45.041187 containerd[1540]: time="2025-09-05T06:01:45.041183564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 5 06:01:45.041236 containerd[1540]: time="2025-09-05T06:01:45.041219164Z" level=info msg="Connect containerd service" Sep 5 06:01:45.041695 containerd[1540]: time="2025-09-05T06:01:45.041259164Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 06:01:45.042164 containerd[1540]: 
time="2025-09-05T06:01:45.042121404Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 06:01:45.108118 containerd[1540]: time="2025-09-05T06:01:45.108034524Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 06:01:45.108326 containerd[1540]: time="2025-09-05T06:01:45.108099964Z" level=info msg="Start subscribing containerd event" Sep 5 06:01:45.108371 containerd[1540]: time="2025-09-05T06:01:45.108356804Z" level=info msg="Start recovering state" Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108289804Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108439124Z" level=info msg="Start event monitor" Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108452124Z" level=info msg="Start cni network conf syncer for default" Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108460204Z" level=info msg="Start streaming server" Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108468644Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108475244Z" level=info msg="runtime interface starting up..." Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108480124Z" level=info msg="starting plugins..." Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108492284Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 5 06:01:45.110118 containerd[1540]: time="2025-09-05T06:01:45.108621804Z" level=info msg="containerd successfully booted in 0.087865s" Sep 5 06:01:45.108723 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 5 06:01:45.163622 tar[1532]: linux-arm64/LICENSE Sep 5 06:01:45.163731 tar[1532]: linux-arm64/README.md Sep 5 06:01:45.183147 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 06:01:45.435925 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 06:01:45.455042 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 06:01:45.457575 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 06:01:45.485765 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 06:01:45.486005 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 06:01:45.488391 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 06:01:45.518349 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 06:01:45.521739 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 06:01:45.524515 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 06:01:45.526079 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 06:01:46.608419 systemd-networkd[1436]: eth0: Gained IPv6LL Sep 5 06:01:46.612015 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 06:01:46.613524 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 06:01:46.615570 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 06:01:46.617534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:01:46.619306 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 06:01:46.650973 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 06:01:46.651154 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 06:01:46.652629 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 5 06:01:46.653174 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 06:01:47.168050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:01:47.169563 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 06:01:47.170466 systemd[1]: Startup finished in 1.966s (kernel) + 4.984s (initrd) + 4.073s (userspace) = 11.025s. Sep 5 06:01:47.171732 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 06:01:47.543955 kubelet[1639]: E0905 06:01:47.543845 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 06:01:47.546035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 06:01:47.546168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 06:01:47.546519 systemd[1]: kubelet.service: Consumed 764ms CPU time, 256.8M memory peak. Sep 5 06:01:50.581502 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 06:01:50.582461 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:54450.service - OpenSSH per-connection server daemon (10.0.0.1:54450). Sep 5 06:01:50.695996 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 54450 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:01:50.697716 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:01:50.703483 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 06:01:50.704330 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 06:01:50.709274 systemd-logind[1516]: New session 1 of user core. 
Sep 5 06:01:50.725002 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 06:01:50.728459 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 06:01:50.750166 (systemd)[1658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 06:01:50.752580 systemd-logind[1516]: New session c1 of user core. Sep 5 06:01:50.857629 systemd[1658]: Queued start job for default target default.target. Sep 5 06:01:50.878107 systemd[1658]: Created slice app.slice - User Application Slice. Sep 5 06:01:50.878138 systemd[1658]: Reached target paths.target - Paths. Sep 5 06:01:50.878174 systemd[1658]: Reached target timers.target - Timers. Sep 5 06:01:50.879324 systemd[1658]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 06:01:50.887956 systemd[1658]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 06:01:50.888022 systemd[1658]: Reached target sockets.target - Sockets. Sep 5 06:01:50.888060 systemd[1658]: Reached target basic.target - Basic System. Sep 5 06:01:50.888090 systemd[1658]: Reached target default.target - Main User Target. Sep 5 06:01:50.888114 systemd[1658]: Startup finished in 130ms. Sep 5 06:01:50.888235 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 06:01:50.889519 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 06:01:50.955480 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:54454.service - OpenSSH per-connection server daemon (10.0.0.1:54454). Sep 5 06:01:50.997566 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 54454 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:01:50.998695 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:01:51.002795 systemd-logind[1516]: New session 2 of user core. Sep 5 06:01:51.014350 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 5 06:01:51.064822 sshd[1672]: Connection closed by 10.0.0.1 port 54454
Sep 5 06:01:51.065237 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Sep 5 06:01:51.072013 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:54454.service: Deactivated successfully.
Sep 5 06:01:51.074062 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 06:01:51.075186 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit.
Sep 5 06:01:51.077027 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:54466.service - OpenSSH per-connection server daemon (10.0.0.1:54466).
Sep 5 06:01:51.077914 systemd-logind[1516]: Removed session 2.
Sep 5 06:01:51.126518 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 54466 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:01:51.127608 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:01:51.131796 systemd-logind[1516]: New session 3 of user core.
Sep 5 06:01:51.141339 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 06:01:51.189566 sshd[1681]: Connection closed by 10.0.0.1 port 54466
Sep 5 06:01:51.189936 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
Sep 5 06:01:51.211117 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:54466.service: Deactivated successfully.
Sep 5 06:01:51.213595 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 06:01:51.214228 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit.
Sep 5 06:01:51.216096 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:54474.service - OpenSSH per-connection server daemon (10.0.0.1:54474).
Sep 5 06:01:51.216950 systemd-logind[1516]: Removed session 3.
Sep 5 06:01:51.275241 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 54474 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:01:51.276283 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:01:51.280502 systemd-logind[1516]: New session 4 of user core.
Sep 5 06:01:51.295372 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 06:01:51.345921 sshd[1690]: Connection closed by 10.0.0.1 port 54474
Sep 5 06:01:51.346213 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
Sep 5 06:01:51.355064 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:54474.service: Deactivated successfully.
Sep 5 06:01:51.358449 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 06:01:51.359042 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit.
Sep 5 06:01:51.361084 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:54488.service - OpenSSH per-connection server daemon (10.0.0.1:54488).
Sep 5 06:01:51.361704 systemd-logind[1516]: Removed session 4.
Sep 5 06:01:51.416488 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 54488 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:01:51.417600 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:01:51.421996 systemd-logind[1516]: New session 5 of user core.
Sep 5 06:01:51.429401 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 06:01:51.488418 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 06:01:51.488718 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:01:51.499118 sudo[1700]: pam_unix(sudo:session): session closed for user root
Sep 5 06:01:51.500698 sshd[1699]: Connection closed by 10.0.0.1 port 54488
Sep 5 06:01:51.501223 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Sep 5 06:01:51.512187 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:54488.service: Deactivated successfully.
Sep 5 06:01:51.514812 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 06:01:51.515712 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit.
Sep 5 06:01:51.517998 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:54502.service - OpenSSH per-connection server daemon (10.0.0.1:54502).
Sep 5 06:01:51.518735 systemd-logind[1516]: Removed session 5.
Sep 5 06:01:51.571940 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 54502 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:01:51.573373 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:01:51.578810 systemd-logind[1516]: New session 6 of user core.
Sep 5 06:01:51.589353 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 06:01:51.640850 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 06:01:51.641117 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:01:51.645482 sudo[1711]: pam_unix(sudo:session): session closed for user root
Sep 5 06:01:51.649662 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 5 06:01:51.649899 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:01:51.657521 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 06:01:51.695487 augenrules[1733]: No rules
Sep 5 06:01:51.696614 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 06:01:51.696835 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 06:01:51.698022 sudo[1710]: pam_unix(sudo:session): session closed for user root
Sep 5 06:01:51.699284 sshd[1709]: Connection closed by 10.0.0.1 port 54502
Sep 5 06:01:51.699687 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Sep 5 06:01:51.711156 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:54502.service: Deactivated successfully.
Sep 5 06:01:51.712757 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 06:01:51.713593 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit.
Sep 5 06:01:51.715919 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:54510.service - OpenSSH per-connection server daemon (10.0.0.1:54510).
Sep 5 06:01:51.716533 systemd-logind[1516]: Removed session 6.
Sep 5 06:01:51.777740 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 54510 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:01:51.778770 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:01:51.783040 systemd-logind[1516]: New session 7 of user core.
Sep 5 06:01:51.798346 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 06:01:51.849439 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 06:01:51.849965 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:01:52.105382 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 06:01:52.117566 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 06:01:52.310844 dockerd[1768]: time="2025-09-05T06:01:52.310783204Z" level=info msg="Starting up"
Sep 5 06:01:52.311558 dockerd[1768]: time="2025-09-05T06:01:52.311540004Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 5 06:01:52.320971 dockerd[1768]: time="2025-09-05T06:01:52.320941764Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 5 06:01:52.425395 dockerd[1768]: time="2025-09-05T06:01:52.425281844Z" level=info msg="Loading containers: start."
Sep 5 06:01:52.433213 kernel: Initializing XFRM netlink socket
Sep 5 06:01:52.608618 systemd-networkd[1436]: docker0: Link UP
Sep 5 06:01:52.611535 dockerd[1768]: time="2025-09-05T06:01:52.611498764Z" level=info msg="Loading containers: done."
Sep 5 06:01:52.623072 dockerd[1768]: time="2025-09-05T06:01:52.623024164Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 06:01:52.623169 dockerd[1768]: time="2025-09-05T06:01:52.623093884Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 5 06:01:52.623169 dockerd[1768]: time="2025-09-05T06:01:52.623161724Z" level=info msg="Initializing buildkit"
Sep 5 06:01:52.644455 dockerd[1768]: time="2025-09-05T06:01:52.644419004Z" level=info msg="Completed buildkit initialization"
Sep 5 06:01:52.648985 dockerd[1768]: time="2025-09-05T06:01:52.648949244Z" level=info msg="Daemon has completed initialization"
Sep 5 06:01:52.649173 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 06:01:52.649677 dockerd[1768]: time="2025-09-05T06:01:52.649098724Z" level=info msg="API listen on /run/docker.sock"
Sep 5 06:01:53.195371 containerd[1540]: time="2025-09-05T06:01:53.195166564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 5 06:01:53.733663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1694859656.mount: Deactivated successfully.
Sep 5 06:01:54.854959 containerd[1540]: time="2025-09-05T06:01:54.854905924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:54.855460 containerd[1540]: time="2025-09-05T06:01:54.855429084Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 5 06:01:54.856181 containerd[1540]: time="2025-09-05T06:01:54.856153084Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:54.859446 containerd[1540]: time="2025-09-05T06:01:54.858985004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:54.859999 containerd[1540]: time="2025-09-05T06:01:54.859960564Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.6647364s"
Sep 5 06:01:54.859999 containerd[1540]: time="2025-09-05T06:01:54.859998564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 5 06:01:54.861135 containerd[1540]: time="2025-09-05T06:01:54.861100444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 5 06:01:56.486120 containerd[1540]: time="2025-09-05T06:01:56.486067524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:56.486619 containerd[1540]: time="2025-09-05T06:01:56.486590324Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 5 06:01:56.487555 containerd[1540]: time="2025-09-05T06:01:56.487531004Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:56.490459 containerd[1540]: time="2025-09-05T06:01:56.490408364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:56.491204 containerd[1540]: time="2025-09-05T06:01:56.491168884Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.63003552s"
Sep 5 06:01:56.491246 containerd[1540]: time="2025-09-05T06:01:56.491213164Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 5 06:01:56.491714 containerd[1540]: time="2025-09-05T06:01:56.491681444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 5 06:01:57.796589 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 06:01:57.798303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:01:57.799811 containerd[1540]: time="2025-09-05T06:01:57.799771724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:57.800651 containerd[1540]: time="2025-09-05T06:01:57.800370604Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 5 06:01:57.801426 containerd[1540]: time="2025-09-05T06:01:57.801391564Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:57.804269 containerd[1540]: time="2025-09-05T06:01:57.804243004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:57.805490 containerd[1540]: time="2025-09-05T06:01:57.805452004Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.313733s"
Sep 5 06:01:57.805490 containerd[1540]: time="2025-09-05T06:01:57.805488044Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 5 06:01:57.806146 containerd[1540]: time="2025-09-05T06:01:57.806120404Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 5 06:01:57.920181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:01:57.923702 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 06:01:57.967152 kubelet[2059]: E0905 06:01:57.967092 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 06:01:57.970369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 06:01:57.970521 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 06:01:57.972300 systemd[1]: kubelet.service: Consumed 140ms CPU time, 105.6M memory peak.
Sep 5 06:01:58.802069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount547601724.mount: Deactivated successfully.
Sep 5 06:01:59.154015 containerd[1540]: time="2025-09-05T06:01:59.153909604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:59.154472 containerd[1540]: time="2025-09-05T06:01:59.154359684Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 5 06:01:59.155141 containerd[1540]: time="2025-09-05T06:01:59.155096244Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:59.157233 containerd[1540]: time="2025-09-05T06:01:59.157207404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:01:59.158152 containerd[1540]: time="2025-09-05T06:01:59.157741604Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.35158796s"
Sep 5 06:01:59.158152 containerd[1540]: time="2025-09-05T06:01:59.157776004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 5 06:01:59.158310 containerd[1540]: time="2025-09-05T06:01:59.158167804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 5 06:01:59.714330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841310117.mount: Deactivated successfully.
Sep 5 06:02:00.460831 containerd[1540]: time="2025-09-05T06:02:00.459922604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:00.460831 containerd[1540]: time="2025-09-05T06:02:00.460547644Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 5 06:02:00.461359 containerd[1540]: time="2025-09-05T06:02:00.461329724Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:00.464538 containerd[1540]: time="2025-09-05T06:02:00.464500444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:00.466294 containerd[1540]: time="2025-09-05T06:02:00.466246364Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3080538s"
Sep 5 06:02:00.466334 containerd[1540]: time="2025-09-05T06:02:00.466299044Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 5 06:02:00.466928 containerd[1540]: time="2025-09-05T06:02:00.466684164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 06:02:00.873698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370394252.mount: Deactivated successfully.
Sep 5 06:02:00.879001 containerd[1540]: time="2025-09-05T06:02:00.878960164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:02:00.879656 containerd[1540]: time="2025-09-05T06:02:00.879421284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 5 06:02:00.880377 containerd[1540]: time="2025-09-05T06:02:00.880338444Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:02:00.882294 containerd[1540]: time="2025-09-05T06:02:00.882247804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:02:00.883075 containerd[1540]: time="2025-09-05T06:02:00.883049924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 416.33672ms"
Sep 5 06:02:00.883359 containerd[1540]: time="2025-09-05T06:02:00.883147764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 5 06:02:00.883652 containerd[1540]: time="2025-09-05T06:02:00.883629364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 5 06:02:01.349945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069008316.mount: Deactivated successfully.
Sep 5 06:02:03.651330 containerd[1540]: time="2025-09-05T06:02:03.651270124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:03.651777 containerd[1540]: time="2025-09-05T06:02:03.651748404Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 5 06:02:03.652769 containerd[1540]: time="2025-09-05T06:02:03.652694644Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:03.655427 containerd[1540]: time="2025-09-05T06:02:03.655400724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:03.656645 containerd[1540]: time="2025-09-05T06:02:03.656446724Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.7727744s"
Sep 5 06:02:03.656645 containerd[1540]: time="2025-09-05T06:02:03.656477404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 5 06:02:07.975396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 5 06:02:07.977056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:02:07.980944 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 5 06:02:07.981014 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 5 06:02:07.981220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:02:07.983991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:02:08.001998 systemd[1]: Reload requested from client PID 2216 ('systemctl') (unit session-7.scope)...
Sep 5 06:02:08.002013 systemd[1]: Reloading...
Sep 5 06:02:08.067300 zram_generator::config[2263]: No configuration found.
Sep 5 06:02:08.231010 systemd[1]: Reloading finished in 228 ms.
Sep 5 06:02:08.281779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:02:08.284355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:02:08.285363 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 06:02:08.286279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:02:08.286323 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.2M memory peak.
Sep 5 06:02:08.287927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:02:08.433970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:02:08.441514 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 06:02:08.475577 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 06:02:08.475577 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 5 06:02:08.475577 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 06:02:08.475888 kubelet[2307]: I0905 06:02:08.475609 2307 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 06:02:09.097801 kubelet[2307]: I0905 06:02:09.097756 2307 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 5 06:02:09.097801 kubelet[2307]: I0905 06:02:09.097787 2307 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 06:02:09.098056 kubelet[2307]: I0905 06:02:09.098026 2307 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 5 06:02:09.120454 kubelet[2307]: E0905 06:02:09.120421 2307 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 5 06:02:09.124018 kubelet[2307]: I0905 06:02:09.123986 2307 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 06:02:09.133590 kubelet[2307]: I0905 06:02:09.133562 2307 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 5 06:02:09.136954 kubelet[2307]: I0905 06:02:09.136932 2307 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 06:02:09.137690 kubelet[2307]: I0905 06:02:09.137661 2307 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 5 06:02:09.137832 kubelet[2307]: I0905 06:02:09.137796 2307 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 06:02:09.137979 kubelet[2307]: I0905 06:02:09.137824 2307 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 06:02:09.138063 kubelet[2307]: I0905 06:02:09.138039 2307 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 06:02:09.138063 kubelet[2307]: I0905 06:02:09.138051 2307 container_manager_linux.go:300] "Creating device plugin manager"
Sep 5 06:02:09.138323 kubelet[2307]: I0905 06:02:09.138295 2307 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 06:02:09.140666 kubelet[2307]: I0905 06:02:09.140187 2307 kubelet.go:408] "Attempting to sync node with API server"
Sep 5 06:02:09.140666 kubelet[2307]: I0905 06:02:09.140234 2307 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 06:02:09.140666 kubelet[2307]: I0905 06:02:09.140253 2307 kubelet.go:314] "Adding apiserver pod source"
Sep 5 06:02:09.140666 kubelet[2307]: I0905 06:02:09.140330 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 06:02:09.142055 kubelet[2307]: W0905 06:02:09.141999 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 5 06:02:09.142109 kubelet[2307]: E0905 06:02:09.142064 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 5 06:02:09.142386 kubelet[2307]: W0905 06:02:09.142346 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Sep 5 06:02:09.142484 kubelet[2307]: E0905 06:02:09.142467 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
Sep 5 06:02:09.143714 kubelet[2307]: I0905 06:02:09.143697 2307 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 5 06:02:09.144464 kubelet[2307]: I0905 06:02:09.144447 2307 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 5 06:02:09.144695 kubelet[2307]: W0905 06:02:09.144685 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 06:02:09.145696 kubelet[2307]: I0905 06:02:09.145676 2307 server.go:1274] "Started kubelet" Sep 5 06:02:09.146603 kubelet[2307]: I0905 06:02:09.146547 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 06:02:09.146741 kubelet[2307]: I0905 06:02:09.146697 2307 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 06:02:09.146816 kubelet[2307]: I0905 06:02:09.146796 2307 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 06:02:09.147768 kubelet[2307]: I0905 06:02:09.147751 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 06:02:09.148058 kubelet[2307]: I0905 06:02:09.148038 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 06:02:09.148165 kubelet[2307]: I0905 06:02:09.148140 2307 server.go:449] "Adding debug handlers to kubelet server" Sep 5 06:02:09.148800 kubelet[2307]: E0905 06:02:09.147830 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18624d9a2991f924 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 06:02:09.145649444 +0000 UTC m=+0.701301721,LastTimestamp:2025-09-05 06:02:09.145649444 +0000 UTC m=+0.701301721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 06:02:09.149810 kubelet[2307]: I0905 06:02:09.149568 2307 volume_manager.go:289] 
"Starting Kubelet Volume Manager" Sep 5 06:02:09.149810 kubelet[2307]: I0905 06:02:09.149704 2307 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 06:02:09.149810 kubelet[2307]: I0905 06:02:09.149765 2307 reconciler.go:26] "Reconciler: start to sync state" Sep 5 06:02:09.150104 kubelet[2307]: E0905 06:02:09.150073 2307 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 06:02:09.150104 kubelet[2307]: I0905 06:02:09.150097 2307 factory.go:221] Registration of the systemd container factory successfully Sep 5 06:02:09.150307 kubelet[2307]: I0905 06:02:09.150212 2307 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 06:02:09.150307 kubelet[2307]: W0905 06:02:09.150179 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 5 06:02:09.150307 kubelet[2307]: E0905 06:02:09.150281 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:02:09.150524 kubelet[2307]: E0905 06:02:09.150486 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:02:09.150696 kubelet[2307]: E0905 06:02:09.150660 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms" Sep 5 06:02:09.151148 kubelet[2307]: I0905 06:02:09.151132 2307 factory.go:221] Registration of the containerd container factory successfully Sep 5 06:02:09.160094 kubelet[2307]: I0905 06:02:09.160075 2307 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 06:02:09.160296 kubelet[2307]: I0905 06:02:09.160180 2307 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 06:02:09.160296 kubelet[2307]: I0905 06:02:09.160218 2307 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:02:09.161872 kubelet[2307]: I0905 06:02:09.161788 2307 policy_none.go:49] "None policy: Start" Sep 5 06:02:09.162713 kubelet[2307]: I0905 06:02:09.162465 2307 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 06:02:09.162713 kubelet[2307]: I0905 06:02:09.162490 2307 state_mem.go:35] "Initializing new in-memory state store" Sep 5 06:02:09.164396 kubelet[2307]: I0905 06:02:09.164357 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 06:02:09.165326 kubelet[2307]: I0905 06:02:09.165304 2307 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 06:02:09.165326 kubelet[2307]: I0905 06:02:09.165328 2307 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 06:02:09.165388 kubelet[2307]: I0905 06:02:09.165343 2307 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 06:02:09.165388 kubelet[2307]: E0905 06:02:09.165378 2307 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 06:02:09.166669 kubelet[2307]: W0905 06:02:09.166626 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Sep 5 06:02:09.166729 kubelet[2307]: E0905 06:02:09.166676 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:02:09.172239 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 06:02:09.181914 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 06:02:09.185069 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 5 06:02:09.208178 kubelet[2307]: I0905 06:02:09.208020 2307 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 06:02:09.208287 kubelet[2307]: I0905 06:02:09.208245 2307 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 06:02:09.208287 kubelet[2307]: I0905 06:02:09.208261 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 06:02:09.208493 kubelet[2307]: I0905 06:02:09.208438 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 06:02:09.210072 kubelet[2307]: E0905 06:02:09.210048 2307 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 06:02:09.273583 systemd[1]: Created slice kubepods-burstable-pod5d184c6beeae86739ad6f1f903438e33.slice - libcontainer container kubepods-burstable-pod5d184c6beeae86739ad6f1f903438e33.slice. Sep 5 06:02:09.292770 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 5 06:02:09.309842 kubelet[2307]: I0905 06:02:09.309811 2307 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 06:02:09.311033 kubelet[2307]: E0905 06:02:09.310563 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Sep 5 06:02:09.313792 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
Sep 5 06:02:09.351670 kubelet[2307]: E0905 06:02:09.351586 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms" Sep 5 06:02:09.451073 kubelet[2307]: I0905 06:02:09.451015 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:02:09.451073 kubelet[2307]: I0905 06:02:09.451056 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:02:09.451073 kubelet[2307]: I0905 06:02:09.451075 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:02:09.451298 kubelet[2307]: I0905 06:02:09.451095 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d184c6beeae86739ad6f1f903438e33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d184c6beeae86739ad6f1f903438e33\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:02:09.451298 kubelet[2307]: I0905 06:02:09.451123 
2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d184c6beeae86739ad6f1f903438e33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5d184c6beeae86739ad6f1f903438e33\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:02:09.451298 kubelet[2307]: I0905 06:02:09.451137 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:02:09.451298 kubelet[2307]: I0905 06:02:09.451152 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:02:09.451298 kubelet[2307]: I0905 06:02:09.451166 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 06:02:09.451394 kubelet[2307]: I0905 06:02:09.451217 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d184c6beeae86739ad6f1f903438e33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d184c6beeae86739ad6f1f903438e33\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:02:09.512802 kubelet[2307]: I0905 06:02:09.512722 2307 kubelet_node_status.go:72] 
"Attempting to register node" node="localhost" Sep 5 06:02:09.513135 kubelet[2307]: E0905 06:02:09.513105 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Sep 5 06:02:09.591503 containerd[1540]: time="2025-09-05T06:02:09.591455524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5d184c6beeae86739ad6f1f903438e33,Namespace:kube-system,Attempt:0,}" Sep 5 06:02:09.609036 containerd[1540]: time="2025-09-05T06:02:09.608925244Z" level=info msg="connecting to shim bb86b2d979a91cf70556e40fb9409143b96ff2d7b31fb07829432898ecb882d9" address="unix:///run/containerd/s/be941e01311a19b92c6a0a4933458fc55ef76b0bf4d8a30778250b177a2fd2bc" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:09.612250 containerd[1540]: time="2025-09-05T06:02:09.612030124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 06:02:09.615835 containerd[1540]: time="2025-09-05T06:02:09.615805844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 06:02:09.633383 systemd[1]: Started cri-containerd-bb86b2d979a91cf70556e40fb9409143b96ff2d7b31fb07829432898ecb882d9.scope - libcontainer container bb86b2d979a91cf70556e40fb9409143b96ff2d7b31fb07829432898ecb882d9. 
Sep 5 06:02:09.642420 containerd[1540]: time="2025-09-05T06:02:09.642370484Z" level=info msg="connecting to shim 0780748f42c682d33993c9e94dab048979e6c8a08eedcd2c170ffe2ef0918356" address="unix:///run/containerd/s/47d4fc044f9f5a828fdf7f046f79106defbf64c5f645f4ff71ba142bbc4597cd" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:09.644493 containerd[1540]: time="2025-09-05T06:02:09.644438524Z" level=info msg="connecting to shim 78e328548084def6057f0207a64148b855834e58026d5c169f5ffbd868e6b74a" address="unix:///run/containerd/s/7c7db7e3a2b17bb24811812881e9c6efc0a5b1e0e9111cdb16b90f5235dbf4dd" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:09.667380 systemd[1]: Started cri-containerd-78e328548084def6057f0207a64148b855834e58026d5c169f5ffbd868e6b74a.scope - libcontainer container 78e328548084def6057f0207a64148b855834e58026d5c169f5ffbd868e6b74a. Sep 5 06:02:09.671290 systemd[1]: Started cri-containerd-0780748f42c682d33993c9e94dab048979e6c8a08eedcd2c170ffe2ef0918356.scope - libcontainer container 0780748f42c682d33993c9e94dab048979e6c8a08eedcd2c170ffe2ef0918356. 
Sep 5 06:02:09.679856 containerd[1540]: time="2025-09-05T06:02:09.679812444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5d184c6beeae86739ad6f1f903438e33,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb86b2d979a91cf70556e40fb9409143b96ff2d7b31fb07829432898ecb882d9\"" Sep 5 06:02:09.683415 containerd[1540]: time="2025-09-05T06:02:09.683278884Z" level=info msg="CreateContainer within sandbox \"bb86b2d979a91cf70556e40fb9409143b96ff2d7b31fb07829432898ecb882d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 06:02:09.697424 containerd[1540]: time="2025-09-05T06:02:09.697387124Z" level=info msg="Container 0ffae01c267667d64bb30e774dbfed23aaf8d8a80c8ce9bc59ba3d137ed1245b: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:09.706249 containerd[1540]: time="2025-09-05T06:02:09.706179844Z" level=info msg="CreateContainer within sandbox \"bb86b2d979a91cf70556e40fb9409143b96ff2d7b31fb07829432898ecb882d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ffae01c267667d64bb30e774dbfed23aaf8d8a80c8ce9bc59ba3d137ed1245b\"" Sep 5 06:02:09.707231 containerd[1540]: time="2025-09-05T06:02:09.707178484Z" level=info msg="StartContainer for \"0ffae01c267667d64bb30e774dbfed23aaf8d8a80c8ce9bc59ba3d137ed1245b\"" Sep 5 06:02:09.707975 containerd[1540]: time="2025-09-05T06:02:09.707537724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"78e328548084def6057f0207a64148b855834e58026d5c169f5ffbd868e6b74a\"" Sep 5 06:02:09.709793 containerd[1540]: time="2025-09-05T06:02:09.709741444Z" level=info msg="connecting to shim 0ffae01c267667d64bb30e774dbfed23aaf8d8a80c8ce9bc59ba3d137ed1245b" address="unix:///run/containerd/s/be941e01311a19b92c6a0a4933458fc55ef76b0bf4d8a30778250b177a2fd2bc" protocol=ttrpc version=3 Sep 5 06:02:09.710335 containerd[1540]: 
time="2025-09-05T06:02:09.710304044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"0780748f42c682d33993c9e94dab048979e6c8a08eedcd2c170ffe2ef0918356\"" Sep 5 06:02:09.710498 containerd[1540]: time="2025-09-05T06:02:09.710471244Z" level=info msg="CreateContainer within sandbox \"78e328548084def6057f0207a64148b855834e58026d5c169f5ffbd868e6b74a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 06:02:09.717747 containerd[1540]: time="2025-09-05T06:02:09.717706844Z" level=info msg="CreateContainer within sandbox \"0780748f42c682d33993c9e94dab048979e6c8a08eedcd2c170ffe2ef0918356\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 06:02:09.718404 containerd[1540]: time="2025-09-05T06:02:09.718374684Z" level=info msg="Container 622cc4b18950be23ef097251bed858bbac222b0ba2865b8ee2aeef111e793072: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:09.726626 containerd[1540]: time="2025-09-05T06:02:09.726581124Z" level=info msg="Container 2f92f9640fcc9a982dc8f9fa3ff6f25dec1b0d9063fe2c77a06e20ae050f5612: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:09.730925 containerd[1540]: time="2025-09-05T06:02:09.730865204Z" level=info msg="CreateContainer within sandbox \"78e328548084def6057f0207a64148b855834e58026d5c169f5ffbd868e6b74a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"622cc4b18950be23ef097251bed858bbac222b0ba2865b8ee2aeef111e793072\"" Sep 5 06:02:09.731257 containerd[1540]: time="2025-09-05T06:02:09.731235684Z" level=info msg="StartContainer for \"622cc4b18950be23ef097251bed858bbac222b0ba2865b8ee2aeef111e793072\"" Sep 5 06:02:09.732259 containerd[1540]: time="2025-09-05T06:02:09.732217964Z" level=info msg="connecting to shim 622cc4b18950be23ef097251bed858bbac222b0ba2865b8ee2aeef111e793072" 
address="unix:///run/containerd/s/7c7db7e3a2b17bb24811812881e9c6efc0a5b1e0e9111cdb16b90f5235dbf4dd" protocol=ttrpc version=3 Sep 5 06:02:09.733961 containerd[1540]: time="2025-09-05T06:02:09.733920164Z" level=info msg="CreateContainer within sandbox \"0780748f42c682d33993c9e94dab048979e6c8a08eedcd2c170ffe2ef0918356\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f92f9640fcc9a982dc8f9fa3ff6f25dec1b0d9063fe2c77a06e20ae050f5612\"" Sep 5 06:02:09.734329 containerd[1540]: time="2025-09-05T06:02:09.734303364Z" level=info msg="StartContainer for \"2f92f9640fcc9a982dc8f9fa3ff6f25dec1b0d9063fe2c77a06e20ae050f5612\"" Sep 5 06:02:09.735179 containerd[1540]: time="2025-09-05T06:02:09.735146084Z" level=info msg="connecting to shim 2f92f9640fcc9a982dc8f9fa3ff6f25dec1b0d9063fe2c77a06e20ae050f5612" address="unix:///run/containerd/s/47d4fc044f9f5a828fdf7f046f79106defbf64c5f645f4ff71ba142bbc4597cd" protocol=ttrpc version=3 Sep 5 06:02:09.738389 systemd[1]: Started cri-containerd-0ffae01c267667d64bb30e774dbfed23aaf8d8a80c8ce9bc59ba3d137ed1245b.scope - libcontainer container 0ffae01c267667d64bb30e774dbfed23aaf8d8a80c8ce9bc59ba3d137ed1245b. Sep 5 06:02:09.752908 kubelet[2307]: E0905 06:02:09.752871 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms" Sep 5 06:02:09.759343 systemd[1]: Started cri-containerd-2f92f9640fcc9a982dc8f9fa3ff6f25dec1b0d9063fe2c77a06e20ae050f5612.scope - libcontainer container 2f92f9640fcc9a982dc8f9fa3ff6f25dec1b0d9063fe2c77a06e20ae050f5612. Sep 5 06:02:09.760278 systemd[1]: Started cri-containerd-622cc4b18950be23ef097251bed858bbac222b0ba2865b8ee2aeef111e793072.scope - libcontainer container 622cc4b18950be23ef097251bed858bbac222b0ba2865b8ee2aeef111e793072. 
Sep 5 06:02:09.803011 containerd[1540]: time="2025-09-05T06:02:09.802976244Z" level=info msg="StartContainer for \"0ffae01c267667d64bb30e774dbfed23aaf8d8a80c8ce9bc59ba3d137ed1245b\" returns successfully" Sep 5 06:02:09.803157 containerd[1540]: time="2025-09-05T06:02:09.803139084Z" level=info msg="StartContainer for \"2f92f9640fcc9a982dc8f9fa3ff6f25dec1b0d9063fe2c77a06e20ae050f5612\" returns successfully" Sep 5 06:02:09.808947 containerd[1540]: time="2025-09-05T06:02:09.808916604Z" level=info msg="StartContainer for \"622cc4b18950be23ef097251bed858bbac222b0ba2865b8ee2aeef111e793072\" returns successfully" Sep 5 06:02:09.915278 kubelet[2307]: I0905 06:02:09.914652 2307 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 06:02:11.375714 kubelet[2307]: E0905 06:02:11.375664 2307 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 06:02:11.471221 kubelet[2307]: I0905 06:02:11.471141 2307 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 06:02:12.141651 kubelet[2307]: I0905 06:02:12.141584 2307 apiserver.go:52] "Watching apiserver" Sep 5 06:02:12.150036 kubelet[2307]: I0905 06:02:12.149998 2307 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 06:02:12.183904 kubelet[2307]: E0905 06:02:12.183738 2307 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 06:02:13.375911 systemd[1]: Reload requested from client PID 2582 ('systemctl') (unit session-7.scope)... Sep 5 06:02:13.376148 systemd[1]: Reloading... Sep 5 06:02:13.432243 zram_generator::config[2628]: No configuration found. Sep 5 06:02:13.592997 systemd[1]: Reloading finished in 216 ms. 
Sep 5 06:02:13.622668 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:02:13.638642 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 06:02:13.640239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:02:13.640281 systemd[1]: kubelet.service: Consumed 1.031s CPU time, 130.4M memory peak. Sep 5 06:02:13.642127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:02:13.792543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:02:13.796325 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 06:02:13.827481 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 06:02:13.827481 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 06:02:13.827481 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 06:02:13.827770 kubelet[2667]: I0905 06:02:13.827531 2667 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 06:02:13.835068 kubelet[2667]: I0905 06:02:13.835017 2667 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 06:02:13.835068 kubelet[2667]: I0905 06:02:13.835039 2667 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 06:02:13.835752 kubelet[2667]: I0905 06:02:13.835736 2667 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 06:02:13.838419 kubelet[2667]: I0905 06:02:13.838401 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 06:02:13.840773 kubelet[2667]: I0905 06:02:13.840746 2667 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 06:02:13.844038 kubelet[2667]: I0905 06:02:13.844018 2667 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 06:02:13.846263 kubelet[2667]: I0905 06:02:13.846245 2667 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 06:02:13.846352 kubelet[2667]: I0905 06:02:13.846342 2667 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 06:02:13.846458 kubelet[2667]: I0905 06:02:13.846438 2667 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 06:02:13.846602 kubelet[2667]: I0905 06:02:13.846459 2667 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 5 06:02:13.846666 kubelet[2667]: I0905 06:02:13.846609 2667 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 06:02:13.846666 kubelet[2667]: I0905 06:02:13.846617 2667 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 06:02:13.846666 kubelet[2667]: I0905 06:02:13.846646 2667 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:02:13.846748 kubelet[2667]: I0905 06:02:13.846732 2667 kubelet.go:408] "Attempting to sync node with API server" Sep 5 06:02:13.846748 kubelet[2667]: I0905 06:02:13.846745 2667 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 06:02:13.846790 kubelet[2667]: I0905 06:02:13.846765 2667 kubelet.go:314] "Adding apiserver pod source" Sep 5 06:02:13.846790 kubelet[2667]: I0905 06:02:13.846778 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 06:02:13.848248 kubelet[2667]: I0905 06:02:13.847483 2667 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 06:02:13.848476 kubelet[2667]: I0905 06:02:13.848457 2667 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 06:02:13.851808 kubelet[2667]: I0905 06:02:13.851521 2667 server.go:1274] "Started kubelet" Sep 5 06:02:13.851808 kubelet[2667]: I0905 06:02:13.851780 2667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 06:02:13.852144 kubelet[2667]: I0905 06:02:13.852095 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 06:02:13.852494 kubelet[2667]: I0905 06:02:13.852480 2667 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 06:02:13.852812 kubelet[2667]: I0905 06:02:13.852775 2667 server.go:449] "Adding debug handlers to kubelet server" Sep 5 06:02:13.854214 kubelet[2667]: 
I0905 06:02:13.854181 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 06:02:13.855484 kubelet[2667]: I0905 06:02:13.855454 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 06:02:13.855884 kubelet[2667]: I0905 06:02:13.855851 2667 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 5 06:02:13.856067 kubelet[2667]: E0905 06:02:13.856046 2667 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:02:13.856439 kubelet[2667]: I0905 06:02:13.856421 2667 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 5 06:02:13.857104 kubelet[2667]: I0905 06:02:13.857088 2667 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 06:02:13.863820 kubelet[2667]: I0905 06:02:13.863788 2667 factory.go:221] Registration of the systemd container factory successfully
Sep 5 06:02:13.863901 kubelet[2667]: I0905 06:02:13.863882 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 06:02:13.868090 kubelet[2667]: E0905 06:02:13.867991 2667 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 06:02:13.868346 kubelet[2667]: I0905 06:02:13.868316 2667 factory.go:221] Registration of the containerd container factory successfully
Sep 5 06:02:13.884724 kubelet[2667]: I0905 06:02:13.884459 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 5 06:02:13.885526 kubelet[2667]: I0905 06:02:13.885494 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 5 06:02:13.885526 kubelet[2667]: I0905 06:02:13.885519 2667 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 5 06:02:13.885526 kubelet[2667]: I0905 06:02:13.885534 2667 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 5 06:02:13.885641 kubelet[2667]: E0905 06:02:13.885571 2667 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 06:02:13.909266 kubelet[2667]: I0905 06:02:13.909175 2667 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 5 06:02:13.909266 kubelet[2667]: I0905 06:02:13.909193 2667 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 5 06:02:13.909266 kubelet[2667]: I0905 06:02:13.909232 2667 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 06:02:13.909382 kubelet[2667]: I0905 06:02:13.909353 2667 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 06:02:13.909382 kubelet[2667]: I0905 06:02:13.909364 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 06:02:13.909382 kubelet[2667]: I0905 06:02:13.909379 2667 policy_none.go:49] "None policy: Start"
Sep 5 06:02:13.910673 kubelet[2667]: I0905 06:02:13.910636 2667 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 5 06:02:13.910673 kubelet[2667]: I0905 06:02:13.910662 2667 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 06:02:13.910802 kubelet[2667]: I0905 06:02:13.910787 2667 state_mem.go:75] "Updated machine memory state"
Sep 5 06:02:13.915052 kubelet[2667]: I0905 06:02:13.914960 2667 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 5 06:02:13.915123 kubelet[2667]: I0905 06:02:13.915106 2667 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 06:02:13.915150 kubelet[2667]: I0905 06:02:13.915123 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 06:02:13.915329 kubelet[2667]: I0905 06:02:13.915309 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 06:02:14.017534 kubelet[2667]: I0905 06:02:14.017507 2667 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 06:02:14.023989 kubelet[2667]: I0905 06:02:14.023962 2667 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 5 06:02:14.024061 kubelet[2667]: I0905 06:02:14.024032 2667 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 5 06:02:14.058121 kubelet[2667]: I0905 06:02:14.058068 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:02:14.058121 kubelet[2667]: I0905 06:02:14.058105 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 06:02:14.058121 kubelet[2667]: I0905 06:02:14.058125 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d184c6beeae86739ad6f1f903438e33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d184c6beeae86739ad6f1f903438e33\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 06:02:14.058388 kubelet[2667]: I0905 06:02:14.058142 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d184c6beeae86739ad6f1f903438e33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5d184c6beeae86739ad6f1f903438e33\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 06:02:14.058388 kubelet[2667]: I0905 06:02:14.058163 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:02:14.058388 kubelet[2667]: I0905 06:02:14.058178 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:02:14.058388 kubelet[2667]: I0905 06:02:14.058223 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:02:14.058388 kubelet[2667]: I0905 06:02:14.058244 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:02:14.058482 kubelet[2667]: I0905 06:02:14.058259 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d184c6beeae86739ad6f1f903438e33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d184c6beeae86739ad6f1f903438e33\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 06:02:14.847839 kubelet[2667]: I0905 06:02:14.847809 2667 apiserver.go:52] "Watching apiserver"
Sep 5 06:02:14.856806 kubelet[2667]: I0905 06:02:14.856768 2667 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 06:02:14.933291 kubelet[2667]: E0905 06:02:14.933256 2667 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:02:14.934376 kubelet[2667]: E0905 06:02:14.934348 2667 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 06:02:14.935986 kubelet[2667]: I0905 06:02:14.935938 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.935926644 podStartE2EDuration="1.935926644s" podCreationTimestamp="2025-09-05 06:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:02:14.935522124 +0000 UTC m=+1.136525921" watchObservedRunningTime="2025-09-05 06:02:14.935926644 +0000 UTC m=+1.136930441"
Sep 5 06:02:14.948557 kubelet[2667]: I0905 06:02:14.948496 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.948484444 podStartE2EDuration="1.948484444s" podCreationTimestamp="2025-09-05 06:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:02:14.948451844 +0000 UTC m=+1.149455641" watchObservedRunningTime="2025-09-05 06:02:14.948484444 +0000 UTC m=+1.149488241"
Sep 5 06:02:14.948653 kubelet[2667]: I0905 06:02:14.948604 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9485989639999999 podStartE2EDuration="1.948598964s" podCreationTimestamp="2025-09-05 06:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:02:14.942282804 +0000 UTC m=+1.143286601" watchObservedRunningTime="2025-09-05 06:02:14.948598964 +0000 UTC m=+1.149602801"
Sep 5 06:02:19.847732 kubelet[2667]: I0905 06:02:19.847680 2667 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 06:02:19.848423 containerd[1540]: time="2025-09-05T06:02:19.848369027Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 06:02:19.849240 kubelet[2667]: I0905 06:02:19.848657 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 06:02:20.429611 systemd[1]: Created slice kubepods-besteffort-podc3a17381_55dd_4f14_986e_61640db49364.slice - libcontainer container kubepods-besteffort-podc3a17381_55dd_4f14_986e_61640db49364.slice.
Sep 5 06:02:20.505815 kubelet[2667]: I0905 06:02:20.505770 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3a17381-55dd-4f14-986e-61640db49364-kube-proxy\") pod \"kube-proxy-fkr26\" (UID: \"c3a17381-55dd-4f14-986e-61640db49364\") " pod="kube-system/kube-proxy-fkr26"
Sep 5 06:02:20.506008 kubelet[2667]: I0905 06:02:20.505994 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3a17381-55dd-4f14-986e-61640db49364-lib-modules\") pod \"kube-proxy-fkr26\" (UID: \"c3a17381-55dd-4f14-986e-61640db49364\") " pod="kube-system/kube-proxy-fkr26"
Sep 5 06:02:20.506128 kubelet[2667]: I0905 06:02:20.506100 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjl6t\" (UniqueName: \"kubernetes.io/projected/c3a17381-55dd-4f14-986e-61640db49364-kube-api-access-sjl6t\") pod \"kube-proxy-fkr26\" (UID: \"c3a17381-55dd-4f14-986e-61640db49364\") " pod="kube-system/kube-proxy-fkr26"
Sep 5 06:02:20.506280 kubelet[2667]: I0905 06:02:20.506252 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3a17381-55dd-4f14-986e-61640db49364-xtables-lock\") pod \"kube-proxy-fkr26\" (UID: \"c3a17381-55dd-4f14-986e-61640db49364\") " pod="kube-system/kube-proxy-fkr26"
Sep 5 06:02:20.613833 kubelet[2667]: E0905 06:02:20.613792 2667 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 5 06:02:20.613833 kubelet[2667]: E0905 06:02:20.613829 2667 projected.go:194] Error preparing data for projected volume kube-api-access-sjl6t for pod kube-system/kube-proxy-fkr26: configmap "kube-root-ca.crt" not found
Sep 5 06:02:20.613974 kubelet[2667]: E0905 06:02:20.613884 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3a17381-55dd-4f14-986e-61640db49364-kube-api-access-sjl6t podName:c3a17381-55dd-4f14-986e-61640db49364 nodeName:}" failed. No retries permitted until 2025-09-05 06:02:21.113864435 +0000 UTC m=+7.314868232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sjl6t" (UniqueName: "kubernetes.io/projected/c3a17381-55dd-4f14-986e-61640db49364-kube-api-access-sjl6t") pod "kube-proxy-fkr26" (UID: "c3a17381-55dd-4f14-986e-61640db49364") : configmap "kube-root-ca.crt" not found
Sep 5 06:02:20.910218 kubelet[2667]: I0905 06:02:20.909456 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vms\" (UniqueName: \"kubernetes.io/projected/a03b3297-6f27-4805-b08a-128f94883724-kube-api-access-68vms\") pod \"tigera-operator-58fc44c59b-xkqcp\" (UID: \"a03b3297-6f27-4805-b08a-128f94883724\") " pod="tigera-operator/tigera-operator-58fc44c59b-xkqcp"
Sep 5 06:02:20.910218 kubelet[2667]: I0905 06:02:20.909497 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a03b3297-6f27-4805-b08a-128f94883724-var-lib-calico\") pod \"tigera-operator-58fc44c59b-xkqcp\" (UID: \"a03b3297-6f27-4805-b08a-128f94883724\") " pod="tigera-operator/tigera-operator-58fc44c59b-xkqcp"
Sep 5 06:02:20.913628 systemd[1]: Created slice kubepods-besteffort-poda03b3297_6f27_4805_b08a_128f94883724.slice - libcontainer container kubepods-besteffort-poda03b3297_6f27_4805_b08a_128f94883724.slice.
Sep 5 06:02:21.217389 containerd[1540]: time="2025-09-05T06:02:21.217356428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-xkqcp,Uid:a03b3297-6f27-4805-b08a-128f94883724,Namespace:tigera-operator,Attempt:0,}"
Sep 5 06:02:21.234356 containerd[1540]: time="2025-09-05T06:02:21.233891293Z" level=info msg="connecting to shim 05f15159dc0fa406950b9bdf73fb824a1908bbc245b8700441ffa17980f48b8d" address="unix:///run/containerd/s/80caac9a844686033695b6fc12379db9181f48f108e42beb775408ce1c227e8d" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:02:21.256384 systemd[1]: Started cri-containerd-05f15159dc0fa406950b9bdf73fb824a1908bbc245b8700441ffa17980f48b8d.scope - libcontainer container 05f15159dc0fa406950b9bdf73fb824a1908bbc245b8700441ffa17980f48b8d.
Sep 5 06:02:21.285262 containerd[1540]: time="2025-09-05T06:02:21.285105698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-xkqcp,Uid:a03b3297-6f27-4805-b08a-128f94883724,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"05f15159dc0fa406950b9bdf73fb824a1908bbc245b8700441ffa17980f48b8d\""
Sep 5 06:02:21.287955 containerd[1540]: time="2025-09-05T06:02:21.287925716Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 5 06:02:21.350434 containerd[1540]: time="2025-09-05T06:02:21.350396112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fkr26,Uid:c3a17381-55dd-4f14-986e-61640db49364,Namespace:kube-system,Attempt:0,}"
Sep 5 06:02:21.363796 containerd[1540]: time="2025-09-05T06:02:21.363622596Z" level=info msg="connecting to shim 17c8a4c6e3c8b2e4b14ae741748b6eda72134f0c38899c0c0fc667a26755c674" address="unix:///run/containerd/s/785a5a5bf001c684acbf6c7bcffed7e6b685f63d8239baa8645e0ee50b753c39" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:02:21.384411 systemd[1]: Started cri-containerd-17c8a4c6e3c8b2e4b14ae741748b6eda72134f0c38899c0c0fc667a26755c674.scope - libcontainer container 17c8a4c6e3c8b2e4b14ae741748b6eda72134f0c38899c0c0fc667a26755c674.
Sep 5 06:02:21.404060 containerd[1540]: time="2025-09-05T06:02:21.404024172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fkr26,Uid:c3a17381-55dd-4f14-986e-61640db49364,Namespace:kube-system,Attempt:0,} returns sandbox id \"17c8a4c6e3c8b2e4b14ae741748b6eda72134f0c38899c0c0fc667a26755c674\""
Sep 5 06:02:21.409133 containerd[1540]: time="2025-09-05T06:02:21.408235239Z" level=info msg="CreateContainer within sandbox \"17c8a4c6e3c8b2e4b14ae741748b6eda72134f0c38899c0c0fc667a26755c674\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 06:02:21.415961 containerd[1540]: time="2025-09-05T06:02:21.415932528Z" level=info msg="Container 802358cc2ccd3a98673d783bd341d7629e7d9bf8e99a8eb54cb563431ad9bf16: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:02:21.422492 containerd[1540]: time="2025-09-05T06:02:21.422457169Z" level=info msg="CreateContainer within sandbox \"17c8a4c6e3c8b2e4b14ae741748b6eda72134f0c38899c0c0fc667a26755c674\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"802358cc2ccd3a98673d783bd341d7629e7d9bf8e99a8eb54cb563431ad9bf16\""
Sep 5 06:02:21.423056 containerd[1540]: time="2025-09-05T06:02:21.423028093Z" level=info msg="StartContainer for \"802358cc2ccd3a98673d783bd341d7629e7d9bf8e99a8eb54cb563431ad9bf16\""
Sep 5 06:02:21.425700 containerd[1540]: time="2025-09-05T06:02:21.425669590Z" level=info msg="connecting to shim 802358cc2ccd3a98673d783bd341d7629e7d9bf8e99a8eb54cb563431ad9bf16" address="unix:///run/containerd/s/785a5a5bf001c684acbf6c7bcffed7e6b685f63d8239baa8645e0ee50b753c39" protocol=ttrpc version=3
Sep 5 06:02:21.444371 systemd[1]: Started cri-containerd-802358cc2ccd3a98673d783bd341d7629e7d9bf8e99a8eb54cb563431ad9bf16.scope - libcontainer container 802358cc2ccd3a98673d783bd341d7629e7d9bf8e99a8eb54cb563431ad9bf16.
Sep 5 06:02:21.481778 containerd[1540]: time="2025-09-05T06:02:21.481449663Z" level=info msg="StartContainer for \"802358cc2ccd3a98673d783bd341d7629e7d9bf8e99a8eb54cb563431ad9bf16\" returns successfully"
Sep 5 06:02:22.841773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704839382.mount: Deactivated successfully.
Sep 5 06:02:23.153751 containerd[1540]: time="2025-09-05T06:02:23.153655476Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:23.154540 containerd[1540]: time="2025-09-05T06:02:23.154510480Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 5 06:02:23.154931 containerd[1540]: time="2025-09-05T06:02:23.154908283Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:23.156945 containerd[1540]: time="2025-09-05T06:02:23.156917214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:02:23.158066 containerd[1540]: time="2025-09-05T06:02:23.158001340Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.870042744s"
Sep 5 06:02:23.158105 containerd[1540]: time="2025-09-05T06:02:23.158064420Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 5 06:02:23.162710 containerd[1540]: time="2025-09-05T06:02:23.162584805Z" level=info msg="CreateContainer within sandbox \"05f15159dc0fa406950b9bdf73fb824a1908bbc245b8700441ffa17980f48b8d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 5 06:02:23.168671 containerd[1540]: time="2025-09-05T06:02:23.168639839Z" level=info msg="Container 3ff139b9964ad923f32a2c1e39821dff4a9eafe96e63ae0c2be50d053e467ccb: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:02:23.174724 containerd[1540]: time="2025-09-05T06:02:23.174686273Z" level=info msg="CreateContainer within sandbox \"05f15159dc0fa406950b9bdf73fb824a1908bbc245b8700441ffa17980f48b8d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3ff139b9964ad923f32a2c1e39821dff4a9eafe96e63ae0c2be50d053e467ccb\""
Sep 5 06:02:23.175439 containerd[1540]: time="2025-09-05T06:02:23.175400197Z" level=info msg="StartContainer for \"3ff139b9964ad923f32a2c1e39821dff4a9eafe96e63ae0c2be50d053e467ccb\""
Sep 5 06:02:23.176153 containerd[1540]: time="2025-09-05T06:02:23.176125441Z" level=info msg="connecting to shim 3ff139b9964ad923f32a2c1e39821dff4a9eafe96e63ae0c2be50d053e467ccb" address="unix:///run/containerd/s/80caac9a844686033695b6fc12379db9181f48f108e42beb775408ce1c227e8d" protocol=ttrpc version=3
Sep 5 06:02:23.197358 systemd[1]: Started cri-containerd-3ff139b9964ad923f32a2c1e39821dff4a9eafe96e63ae0c2be50d053e467ccb.scope - libcontainer container 3ff139b9964ad923f32a2c1e39821dff4a9eafe96e63ae0c2be50d053e467ccb.
Sep 5 06:02:23.223736 containerd[1540]: time="2025-09-05T06:02:23.223689426Z" level=info msg="StartContainer for \"3ff139b9964ad923f32a2c1e39821dff4a9eafe96e63ae0c2be50d053e467ccb\" returns successfully" Sep 5 06:02:23.923786 kubelet[2667]: I0905 06:02:23.923688 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fkr26" podStartSLOduration=3.923664288 podStartE2EDuration="3.923664288s" podCreationTimestamp="2025-09-05 06:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:02:21.918188033 +0000 UTC m=+8.119191830" watchObservedRunningTime="2025-09-05 06:02:23.923664288 +0000 UTC m=+10.124668085" Sep 5 06:02:23.924110 kubelet[2667]: I0905 06:02:23.923791 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-xkqcp" podStartSLOduration=2.049591121 podStartE2EDuration="3.923786529s" podCreationTimestamp="2025-09-05 06:02:20 +0000 UTC" firstStartedPulling="2025-09-05 06:02:21.286301546 +0000 UTC m=+7.487305343" lastFinishedPulling="2025-09-05 06:02:23.160496994 +0000 UTC m=+9.361500751" observedRunningTime="2025-09-05 06:02:23.923534687 +0000 UTC m=+10.124538484" watchObservedRunningTime="2025-09-05 06:02:23.923786529 +0000 UTC m=+10.124790326" Sep 5 06:02:28.289766 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 5 06:02:28.290978 sshd[1745]: Connection closed by 10.0.0.1 port 54510 Sep 5 06:02:28.291486 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 5 06:02:28.295719 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:54510.service: Deactivated successfully. Sep 5 06:02:28.298914 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 06:02:28.300427 systemd[1]: session-7.scope: Consumed 6.108s CPU time, 219.9M memory peak. Sep 5 06:02:28.302673 systemd-logind[1516]: Session 7 logged out. 
Waiting for processes to exit. Sep 5 06:02:28.306277 systemd-logind[1516]: Removed session 7. Sep 5 06:02:30.147229 update_engine[1520]: I20250905 06:02:30.146732 1520 update_attempter.cc:509] Updating boot flags... Sep 5 06:02:34.201478 systemd[1]: Created slice kubepods-besteffort-podbed7ebd6_1a10_4609_9c7c_011569619318.slice - libcontainer container kubepods-besteffort-podbed7ebd6_1a10_4609_9c7c_011569619318.slice. Sep 5 06:02:34.292821 kubelet[2667]: I0905 06:02:34.292783 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bed7ebd6-1a10-4609-9c7c-011569619318-typha-certs\") pod \"calico-typha-558c77b744-f27j9\" (UID: \"bed7ebd6-1a10-4609-9c7c-011569619318\") " pod="calico-system/calico-typha-558c77b744-f27j9" Sep 5 06:02:34.292821 kubelet[2667]: I0905 06:02:34.292829 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed7ebd6-1a10-4609-9c7c-011569619318-tigera-ca-bundle\") pod \"calico-typha-558c77b744-f27j9\" (UID: \"bed7ebd6-1a10-4609-9c7c-011569619318\") " pod="calico-system/calico-typha-558c77b744-f27j9" Sep 5 06:02:34.293150 kubelet[2667]: I0905 06:02:34.292848 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxg56\" (UniqueName: \"kubernetes.io/projected/bed7ebd6-1a10-4609-9c7c-011569619318-kube-api-access-zxg56\") pod \"calico-typha-558c77b744-f27j9\" (UID: \"bed7ebd6-1a10-4609-9c7c-011569619318\") " pod="calico-system/calico-typha-558c77b744-f27j9" Sep 5 06:02:34.363493 kubelet[2667]: W0905 06:02:34.363400 2667 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' 
and this object Sep 5 06:02:34.367959 systemd[1]: Created slice kubepods-besteffort-podebc2629c_ffd3_4431_a6e1_674bbb102e98.slice - libcontainer container kubepods-besteffort-podebc2629c_ffd3_4431_a6e1_674bbb102e98.slice. Sep 5 06:02:34.370539 kubelet[2667]: E0905 06:02:34.370485 2667 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 5 06:02:34.495476 kubelet[2667]: I0905 06:02:34.495231 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-lib-modules\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495476 kubelet[2667]: I0905 06:02:34.495309 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqx4\" (UniqueName: \"kubernetes.io/projected/ebc2629c-ffd3-4431-a6e1-674bbb102e98-kube-api-access-wtqx4\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495476 kubelet[2667]: I0905 06:02:34.495337 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-policysync\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495476 kubelet[2667]: I0905 06:02:34.495352 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-cni-bin-dir\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495476 kubelet[2667]: I0905 06:02:34.495366 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-cni-log-dir\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495655 kubelet[2667]: I0905 06:02:34.495381 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-cni-net-dir\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495655 kubelet[2667]: I0905 06:02:34.495397 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc2629c-ffd3-4431-a6e1-674bbb102e98-tigera-ca-bundle\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495655 kubelet[2667]: I0905 06:02:34.495414 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-xtables-lock\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495655 kubelet[2667]: I0905 06:02:34.495456 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-flexvol-driver-host\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495655 kubelet[2667]: I0905 06:02:34.495493 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ebc2629c-ffd3-4431-a6e1-674bbb102e98-node-certs\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495748 kubelet[2667]: I0905 06:02:34.495520 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-var-run-calico\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.495748 kubelet[2667]: I0905 06:02:34.495539 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ebc2629c-ffd3-4431-a6e1-674bbb102e98-var-lib-calico\") pod \"calico-node-sljdn\" (UID: \"ebc2629c-ffd3-4431-a6e1-674bbb102e98\") " pod="calico-system/calico-node-sljdn" Sep 5 06:02:34.506119 containerd[1540]: time="2025-09-05T06:02:34.506076232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-558c77b744-f27j9,Uid:bed7ebd6-1a10-4609-9c7c-011569619318,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:34.559417 kubelet[2667]: E0905 06:02:34.559227 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46zb7" podUID="cf96125f-d73a-4df4-8958-6a721d1e7275" Sep 5 06:02:34.569399 
containerd[1540]: time="2025-09-05T06:02:34.569332725Z" level=info msg="connecting to shim 94bbe279a95131239f78af851292e71d8455fd90e21d3c9839a3cff37d928c58" address="unix:///run/containerd/s/7b24842a91f9da69f5548d33401db48ba527065f07a8e8a57490ed88081b7f60" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:34.597257 kubelet[2667]: E0905 06:02:34.596581 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:34.597257 kubelet[2667]: W0905 06:02:34.596615 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:34.597257 kubelet[2667]: E0905 06:02:34.596634 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:34.598101 kubelet[2667]: E0905 06:02:34.598084 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:34.598165 kubelet[2667]: W0905 06:02:34.598152 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:34.598259 kubelet[2667]: E0905 06:02:34.598246 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:34.607958 kubelet[2667]: E0905 06:02:34.607438 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:34.608044 kubelet[2667]: W0905 06:02:34.608031 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:34.608107 kubelet[2667]: E0905 06:02:34.608092 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:34.626397 systemd[1]: Started cri-containerd-94bbe279a95131239f78af851292e71d8455fd90e21d3c9839a3cff37d928c58.scope - libcontainer container 94bbe279a95131239f78af851292e71d8455fd90e21d3c9839a3cff37d928c58. Sep 5 06:02:34.675349 containerd[1540]: time="2025-09-05T06:02:34.675304015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-558c77b744-f27j9,Uid:bed7ebd6-1a10-4609-9c7c-011569619318,Namespace:calico-system,Attempt:0,} returns sandbox id \"94bbe279a95131239f78af851292e71d8455fd90e21d3c9839a3cff37d928c58\"" Sep 5 06:02:34.687400 containerd[1540]: time="2025-09-05T06:02:34.687366648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 06:02:34.697934 kubelet[2667]: E0905 06:02:34.697910 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:34.698149 kubelet[2667]: W0905 06:02:34.698038 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:34.698149 kubelet[2667]: E0905 06:02:34.698064 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:34.698149 kubelet[2667]: I0905 06:02:34.698097 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2nf\" (UniqueName: \"kubernetes.io/projected/cf96125f-d73a-4df4-8958-6a721d1e7275-kube-api-access-wt2nf\") pod \"csi-node-driver-46zb7\" (UID: \"cf96125f-d73a-4df4-8958-6a721d1e7275\") " pod="calico-system/csi-node-driver-46zb7" Sep 5 06:02:34.698584 kubelet[2667]: I0905 06:02:34.698556 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cf96125f-d73a-4df4-8958-6a721d1e7275-socket-dir\") pod \"csi-node-driver-46zb7\" (UID: \"cf96125f-d73a-4df4-8958-6a721d1e7275\") " pod="calico-system/csi-node-driver-46zb7" Sep 5 06:02:34.700897 kubelet[2667]: I0905 06:02:34.700735 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cf96125f-d73a-4df4-8958-6a721d1e7275-registration-dir\") pod \"csi-node-driver-46zb7\" (UID: \"cf96125f-d73a-4df4-8958-6a721d1e7275\") " pod="calico-system/csi-node-driver-46zb7" Sep 5 06:02:34.702220 kubelet[2667]: I0905 06:02:34.702109 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf96125f-d73a-4df4-8958-6a721d1e7275-kubelet-dir\") pod \"csi-node-driver-46zb7\" (UID: \"cf96125f-d73a-4df4-8958-6a721d1e7275\") " pod="calico-system/csi-node-driver-46zb7" Sep 5 06:02:34.703962 kubelet[2667]: I0905 06:02:34.703808 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cf96125f-d73a-4df4-8958-6a721d1e7275-varrun\") pod \"csi-node-driver-46zb7\" (UID: \"cf96125f-d73a-4df4-8958-6a721d1e7275\") " pod="calico-system/csi-node-driver-46zb7" Sep 5 06:02:35.469786 kubelet[2667]: E0905 06:02:35.469712 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:35.469786 kubelet[2667]: W0905 06:02:35.469730 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:35.469786 kubelet[2667]: E0905 06:02:35.469745 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:35.572621 containerd[1540]: time="2025-09-05T06:02:35.572572817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sljdn,Uid:ebc2629c-ffd3-4431-a6e1-674bbb102e98,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:35.607304 containerd[1540]: time="2025-09-05T06:02:35.607164666Z" level=info msg="connecting to shim 20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f" address="unix:///run/containerd/s/3c8a351ef5ea2a1a1669b21c10480c0820bb9a51a702dfa7144bfe892274b594" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:35.641352 systemd[1]: Started cri-containerd-20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f.scope - libcontainer container 20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f. Sep 5 06:02:35.747607 containerd[1540]: time="2025-09-05T06:02:35.747518586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sljdn,Uid:ebc2629c-ffd3-4431-a6e1-674bbb102e98,Namespace:calico-system,Attempt:0,} returns sandbox id \"20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f\"" Sep 5 06:02:35.887345 kubelet[2667]: E0905 06:02:35.887300 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46zb7" podUID="cf96125f-d73a-4df4-8958-6a721d1e7275" Sep 5 06:02:36.057886 containerd[1540]: time="2025-09-05T06:02:36.057788334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:36.058516 containerd[1540]: time="2025-09-05T06:02:36.058477656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 06:02:36.059453 containerd[1540]: time="2025-09-05T06:02:36.059271978Z" 
level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:36.060856 containerd[1540]: time="2025-09-05T06:02:36.060825582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:36.061916 containerd[1540]: time="2025-09-05T06:02:36.061888304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.373601413s" Sep 5 06:02:36.062021 containerd[1540]: time="2025-09-05T06:02:36.061986585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 06:02:36.062849 containerd[1540]: time="2025-09-05T06:02:36.062815427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 06:02:36.081265 containerd[1540]: time="2025-09-05T06:02:36.080907230Z" level=info msg="CreateContainer within sandbox \"94bbe279a95131239f78af851292e71d8455fd90e21d3c9839a3cff37d928c58\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 06:02:36.086436 containerd[1540]: time="2025-09-05T06:02:36.086408403Z" level=info msg="Container 5ef2aa45a0bac983dae7a36efc53f2a5d3c2bb543daa7f438a65a3a66d6a484d: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:36.092768 containerd[1540]: time="2025-09-05T06:02:36.092705659Z" level=info msg="CreateContainer within sandbox \"94bbe279a95131239f78af851292e71d8455fd90e21d3c9839a3cff37d928c58\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5ef2aa45a0bac983dae7a36efc53f2a5d3c2bb543daa7f438a65a3a66d6a484d\"" Sep 5 06:02:36.094840 containerd[1540]: time="2025-09-05T06:02:36.093303620Z" level=info msg="StartContainer for \"5ef2aa45a0bac983dae7a36efc53f2a5d3c2bb543daa7f438a65a3a66d6a484d\"" Sep 5 06:02:36.094840 containerd[1540]: time="2025-09-05T06:02:36.094333863Z" level=info msg="connecting to shim 5ef2aa45a0bac983dae7a36efc53f2a5d3c2bb543daa7f438a65a3a66d6a484d" address="unix:///run/containerd/s/7b24842a91f9da69f5548d33401db48ba527065f07a8e8a57490ed88081b7f60" protocol=ttrpc version=3 Sep 5 06:02:36.122378 systemd[1]: Started cri-containerd-5ef2aa45a0bac983dae7a36efc53f2a5d3c2bb543daa7f438a65a3a66d6a484d.scope - libcontainer container 5ef2aa45a0bac983dae7a36efc53f2a5d3c2bb543daa7f438a65a3a66d6a484d. Sep 5 06:02:36.160372 containerd[1540]: time="2025-09-05T06:02:36.160334422Z" level=info msg="StartContainer for \"5ef2aa45a0bac983dae7a36efc53f2a5d3c2bb543daa7f438a65a3a66d6a484d\" returns successfully" Sep 5 06:02:36.399905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642481975.mount: Deactivated successfully. 
Sep 5 06:02:36.964231 kubelet[2667]: I0905 06:02:36.964125 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-558c77b744-f27j9" podStartSLOduration=1.584713169 podStartE2EDuration="2.964108838s" podCreationTimestamp="2025-09-05 06:02:34 +0000 UTC" firstStartedPulling="2025-09-05 06:02:34.683298677 +0000 UTC m=+20.884302434" lastFinishedPulling="2025-09-05 06:02:36.062694306 +0000 UTC m=+22.263698103" observedRunningTime="2025-09-05 06:02:36.963565356 +0000 UTC m=+23.164569153" watchObservedRunningTime="2025-09-05 06:02:36.964108838 +0000 UTC m=+23.165112595" Sep 5 06:02:37.010863 kubelet[2667]: E0905 06:02:37.010832 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.010863 kubelet[2667]: W0905 06:02:37.010855 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.010863 kubelet[2667]: E0905 06:02:37.010871 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.011132 kubelet[2667]: E0905 06:02:37.011119 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.011132 kubelet[2667]: W0905 06:02:37.011130 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.011183 kubelet[2667]: E0905 06:02:37.011138 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.011328 kubelet[2667]: E0905 06:02:37.011312 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.011328 kubelet[2667]: W0905 06:02:37.011326 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.011374 kubelet[2667]: E0905 06:02:37.011334 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.011553 kubelet[2667]: E0905 06:02:37.011540 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.011553 kubelet[2667]: W0905 06:02:37.011553 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.011597 kubelet[2667]: E0905 06:02:37.011561 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.011715 kubelet[2667]: E0905 06:02:37.011700 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.011715 kubelet[2667]: W0905 06:02:37.011712 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.011759 kubelet[2667]: E0905 06:02:37.011719 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.011867 kubelet[2667]: E0905 06:02:37.011854 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.011867 kubelet[2667]: W0905 06:02:37.011865 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.011916 kubelet[2667]: E0905 06:02:37.011873 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.012021 kubelet[2667]: E0905 06:02:37.012005 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.012021 kubelet[2667]: W0905 06:02:37.012017 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.012070 kubelet[2667]: E0905 06:02:37.012023 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.012157 kubelet[2667]: E0905 06:02:37.012143 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.012157 kubelet[2667]: W0905 06:02:37.012154 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.012229 kubelet[2667]: E0905 06:02:37.012161 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.012425 kubelet[2667]: E0905 06:02:37.012408 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.012425 kubelet[2667]: W0905 06:02:37.012424 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.012473 kubelet[2667]: E0905 06:02:37.012434 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.012691 kubelet[2667]: E0905 06:02:37.012679 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.012691 kubelet[2667]: W0905 06:02:37.012691 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.012734 kubelet[2667]: E0905 06:02:37.012698 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.012853 kubelet[2667]: E0905 06:02:37.012841 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.012853 kubelet[2667]: W0905 06:02:37.012851 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.012895 kubelet[2667]: E0905 06:02:37.012858 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.013102 kubelet[2667]: E0905 06:02:37.013091 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.013102 kubelet[2667]: W0905 06:02:37.013102 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.013146 kubelet[2667]: E0905 06:02:37.013109 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.013441 kubelet[2667]: E0905 06:02:37.013427 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.013441 kubelet[2667]: W0905 06:02:37.013440 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.013543 kubelet[2667]: E0905 06:02:37.013448 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.013653 kubelet[2667]: E0905 06:02:37.013641 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.013653 kubelet[2667]: W0905 06:02:37.013652 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.013697 kubelet[2667]: E0905 06:02:37.013660 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.013807 kubelet[2667]: E0905 06:02:37.013794 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.013807 kubelet[2667]: W0905 06:02:37.013804 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.013848 kubelet[2667]: E0905 06:02:37.013812 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.032251 kubelet[2667]: E0905 06:02:37.032232 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.032251 kubelet[2667]: W0905 06:02:37.032248 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.032251 kubelet[2667]: E0905 06:02:37.032259 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.032446 kubelet[2667]: E0905 06:02:37.032431 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.032446 kubelet[2667]: W0905 06:02:37.032443 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.032446 kubelet[2667]: E0905 06:02:37.032456 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.032710 kubelet[2667]: E0905 06:02:37.032697 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.032710 kubelet[2667]: W0905 06:02:37.032710 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.032755 kubelet[2667]: E0905 06:02:37.032723 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.032973 kubelet[2667]: E0905 06:02:37.032960 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.032973 kubelet[2667]: W0905 06:02:37.032971 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.033018 kubelet[2667]: E0905 06:02:37.032983 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.033134 kubelet[2667]: E0905 06:02:37.033121 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.033134 kubelet[2667]: W0905 06:02:37.033132 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.033181 kubelet[2667]: E0905 06:02:37.033144 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.033344 kubelet[2667]: E0905 06:02:37.033324 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.033344 kubelet[2667]: W0905 06:02:37.033336 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.033398 kubelet[2667]: E0905 06:02:37.033353 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.033731 kubelet[2667]: E0905 06:02:37.033716 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.033731 kubelet[2667]: W0905 06:02:37.033729 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.033777 kubelet[2667]: E0905 06:02:37.033743 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.033917 kubelet[2667]: E0905 06:02:37.033904 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.033917 kubelet[2667]: W0905 06:02:37.033915 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.033980 kubelet[2667]: E0905 06:02:37.033960 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.034105 kubelet[2667]: E0905 06:02:37.034090 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.034105 kubelet[2667]: W0905 06:02:37.034102 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.034154 kubelet[2667]: E0905 06:02:37.034123 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.034264 kubelet[2667]: E0905 06:02:37.034252 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.034264 kubelet[2667]: W0905 06:02:37.034263 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.034317 kubelet[2667]: E0905 06:02:37.034280 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.034428 kubelet[2667]: E0905 06:02:37.034416 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.034428 kubelet[2667]: W0905 06:02:37.034427 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.034469 kubelet[2667]: E0905 06:02:37.034442 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.034580 kubelet[2667]: E0905 06:02:37.034569 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.034607 kubelet[2667]: W0905 06:02:37.034580 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.034607 kubelet[2667]: E0905 06:02:37.034591 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.034742 kubelet[2667]: E0905 06:02:37.034730 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.034742 kubelet[2667]: W0905 06:02:37.034741 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.034782 kubelet[2667]: E0905 06:02:37.034755 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.034939 kubelet[2667]: E0905 06:02:37.034926 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.034939 kubelet[2667]: W0905 06:02:37.034938 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.034981 kubelet[2667]: E0905 06:02:37.034947 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.035111 kubelet[2667]: E0905 06:02:37.035100 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.035111 kubelet[2667]: W0905 06:02:37.035110 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.035152 kubelet[2667]: E0905 06:02:37.035125 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.035322 kubelet[2667]: E0905 06:02:37.035308 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.035322 kubelet[2667]: W0905 06:02:37.035320 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.035366 kubelet[2667]: E0905 06:02:37.035339 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.035603 kubelet[2667]: E0905 06:02:37.035591 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.035603 kubelet[2667]: W0905 06:02:37.035602 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.035645 kubelet[2667]: E0905 06:02:37.035614 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:02:37.035762 kubelet[2667]: E0905 06:02:37.035748 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:02:37.035762 kubelet[2667]: W0905 06:02:37.035760 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:02:37.035802 kubelet[2667]: E0905 06:02:37.035770 2667 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:02:37.228055 containerd[1540]: time="2025-09-05T06:02:37.227954519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:37.231157 containerd[1540]: time="2025-09-05T06:02:37.231049446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 06:02:37.232137 containerd[1540]: time="2025-09-05T06:02:37.232106729Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:37.234253 containerd[1540]: time="2025-09-05T06:02:37.234214453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:37.234912 containerd[1540]: time="2025-09-05T06:02:37.234883415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.172034468s" Sep 5 06:02:37.234912 containerd[1540]: time="2025-09-05T06:02:37.234908895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 06:02:37.238085 containerd[1540]: time="2025-09-05T06:02:37.238058342Z" level=info msg="CreateContainer within sandbox \"20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 06:02:37.249049 containerd[1540]: time="2025-09-05T06:02:37.246685521Z" level=info msg="Container edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:37.256787 containerd[1540]: time="2025-09-05T06:02:37.256748224Z" level=info msg="CreateContainer within sandbox \"20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d\"" Sep 5 06:02:37.257389 containerd[1540]: time="2025-09-05T06:02:37.257228345Z" level=info msg="StartContainer for \"edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d\"" Sep 5 06:02:37.258728 containerd[1540]: time="2025-09-05T06:02:37.258693749Z" level=info msg="connecting to shim edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d" address="unix:///run/containerd/s/3c8a351ef5ea2a1a1669b21c10480c0820bb9a51a702dfa7144bfe892274b594" protocol=ttrpc version=3 Sep 5 06:02:37.294380 systemd[1]: Started cri-containerd-edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d.scope - libcontainer container edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d. 
Sep 5 06:02:37.342141 containerd[1540]: time="2025-09-05T06:02:37.341850816Z" level=info msg="StartContainer for \"edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d\" returns successfully" Sep 5 06:02:37.343103 systemd[1]: cri-containerd-edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d.scope: Deactivated successfully. Sep 5 06:02:37.363540 containerd[1540]: time="2025-09-05T06:02:37.363473785Z" level=info msg="received exit event container_id:\"edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d\" id:\"edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d\" pid:3351 exited_at:{seconds:1757052157 nanos:359288456}" Sep 5 06:02:37.363540 containerd[1540]: time="2025-09-05T06:02:37.363506105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d\" id:\"edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d\" pid:3351 exited_at:{seconds:1757052157 nanos:359288456}" Sep 5 06:02:37.404279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edce270561189639eb677569c8eefde3d1b23b4d87714222257635e52c19f60d-rootfs.mount: Deactivated successfully. 
Sep 5 06:02:37.886408 kubelet[2667]: E0905 06:02:37.886366 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46zb7" podUID="cf96125f-d73a-4df4-8958-6a721d1e7275" Sep 5 06:02:37.967992 kubelet[2667]: I0905 06:02:37.967844 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:02:37.969304 containerd[1540]: time="2025-09-05T06:02:37.968999873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 06:02:39.887207 kubelet[2667]: E0905 06:02:39.886180 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46zb7" podUID="cf96125f-d73a-4df4-8958-6a721d1e7275" Sep 5 06:02:40.906029 containerd[1540]: time="2025-09-05T06:02:40.905987491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:40.906853 containerd[1540]: time="2025-09-05T06:02:40.906496532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 06:02:40.907284 containerd[1540]: time="2025-09-05T06:02:40.907256934Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:40.909190 containerd[1540]: time="2025-09-05T06:02:40.909159777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 
06:02:40.910450 containerd[1540]: time="2025-09-05T06:02:40.910426700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.941392266s" Sep 5 06:02:40.910515 containerd[1540]: time="2025-09-05T06:02:40.910453900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 06:02:40.912462 containerd[1540]: time="2025-09-05T06:02:40.912432623Z" level=info msg="CreateContainer within sandbox \"20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 06:02:40.919651 containerd[1540]: time="2025-09-05T06:02:40.919616397Z" level=info msg="Container d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:40.927561 containerd[1540]: time="2025-09-05T06:02:40.927520531Z" level=info msg="CreateContainer within sandbox \"20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045\"" Sep 5 06:02:40.927993 containerd[1540]: time="2025-09-05T06:02:40.927966772Z" level=info msg="StartContainer for \"d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045\"" Sep 5 06:02:40.929343 containerd[1540]: time="2025-09-05T06:02:40.929318655Z" level=info msg="connecting to shim d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045" address="unix:///run/containerd/s/3c8a351ef5ea2a1a1669b21c10480c0820bb9a51a702dfa7144bfe892274b594" protocol=ttrpc version=3 Sep 5 06:02:40.946337 
systemd[1]: Started cri-containerd-d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045.scope - libcontainer container d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045. Sep 5 06:02:40.976094 containerd[1540]: time="2025-09-05T06:02:40.976061022Z" level=info msg="StartContainer for \"d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045\" returns successfully" Sep 5 06:02:41.507229 systemd[1]: cri-containerd-d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045.scope: Deactivated successfully. Sep 5 06:02:41.507914 systemd[1]: cri-containerd-d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045.scope: Consumed 433ms CPU time, 175.2M memory peak, 2.8M read from disk, 165.8M written to disk. Sep 5 06:02:41.518317 containerd[1540]: time="2025-09-05T06:02:41.518269690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045\" id:\"d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045\" pid:3411 exited_at:{seconds:1757052161 nanos:517860330}" Sep 5 06:02:41.519137 containerd[1540]: time="2025-09-05T06:02:41.518512051Z" level=info msg="received exit event container_id:\"d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045\" id:\"d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045\" pid:3411 exited_at:{seconds:1757052161 nanos:517860330}" Sep 5 06:02:41.535611 kubelet[2667]: I0905 06:02:41.535564 2667 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 06:02:41.540878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0a08d27a849669917e0320457ce15215f7228a92fd369ddb9bf78610a28f045-rootfs.mount: Deactivated successfully. Sep 5 06:02:41.584717 systemd[1]: Created slice kubepods-burstable-pod7ef244c2_37ee_4de1_9971_98c536e0997f.slice - libcontainer container kubepods-burstable-pod7ef244c2_37ee_4de1_9971_98c536e0997f.slice. 
Sep 5 06:02:41.624789 systemd[1]: Created slice kubepods-besteffort-pod4da8305a_e7a9_4d0c_8e11_a4992d816014.slice - libcontainer container kubepods-besteffort-pod4da8305a_e7a9_4d0c_8e11_a4992d816014.slice. Sep 5 06:02:41.631129 systemd[1]: Created slice kubepods-besteffort-podeef48bcf_2bf3_4c04_9956_6120276d01a9.slice - libcontainer container kubepods-besteffort-podeef48bcf_2bf3_4c04_9956_6120276d01a9.slice. Sep 5 06:02:41.636313 systemd[1]: Created slice kubepods-besteffort-pode019660a_77f2_4342_88a6_1b159fdd9044.slice - libcontainer container kubepods-besteffort-pode019660a_77f2_4342_88a6_1b159fdd9044.slice. Sep 5 06:02:41.641030 systemd[1]: Created slice kubepods-burstable-pod8693afd1_cf6f_4d9b_a24f_ad4d05639b9c.slice - libcontainer container kubepods-burstable-pod8693afd1_cf6f_4d9b_a24f_ad4d05639b9c.slice. Sep 5 06:02:41.644984 systemd[1]: Created slice kubepods-besteffort-poda07d03b6_b51b_4713_93e9_707285ad93b5.slice - libcontainer container kubepods-besteffort-poda07d03b6_b51b_4713_93e9_707285ad93b5.slice. Sep 5 06:02:41.648915 systemd[1]: Created slice kubepods-besteffort-podee39782f_381e_4ec7_a3ce_308afbb7868d.slice - libcontainer container kubepods-besteffort-podee39782f_381e_4ec7_a3ce_308afbb7868d.slice. 
Sep 5 06:02:41.664663 kubelet[2667]: I0905 06:02:41.664627 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjgt\" (UniqueName: \"kubernetes.io/projected/4da8305a-e7a9-4d0c-8e11-a4992d816014-kube-api-access-pgjgt\") pod \"calico-kube-controllers-9c78c446b-p45vd\" (UID: \"4da8305a-e7a9-4d0c-8e11-a4992d816014\") " pod="calico-system/calico-kube-controllers-9c78c446b-p45vd" Sep 5 06:02:41.664949 kubelet[2667]: I0905 06:02:41.664828 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkw66\" (UniqueName: \"kubernetes.io/projected/7ef244c2-37ee-4de1-9971-98c536e0997f-kube-api-access-hkw66\") pod \"coredns-7c65d6cfc9-r2q67\" (UID: \"7ef244c2-37ee-4de1-9971-98c536e0997f\") " pod="kube-system/coredns-7c65d6cfc9-r2q67" Sep 5 06:02:41.664949 kubelet[2667]: I0905 06:02:41.664864 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdscf\" (UniqueName: \"kubernetes.io/projected/8693afd1-cf6f-4d9b-a24f-ad4d05639b9c-kube-api-access-bdscf\") pod \"coredns-7c65d6cfc9-bc62q\" (UID: \"8693afd1-cf6f-4d9b-a24f-ad4d05639b9c\") " pod="kube-system/coredns-7c65d6cfc9-bc62q" Sep 5 06:02:41.664949 kubelet[2667]: I0905 06:02:41.664881 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5d6\" (UniqueName: \"kubernetes.io/projected/eef48bcf-2bf3-4c04-9956-6120276d01a9-kube-api-access-px5d6\") pod \"calico-apiserver-55c67bb578-k765k\" (UID: \"eef48bcf-2bf3-4c04-9956-6120276d01a9\") " pod="calico-apiserver/calico-apiserver-55c67bb578-k765k" Sep 5 06:02:41.664949 kubelet[2667]: I0905 06:02:41.664905 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2ntr\" (UniqueName: \"kubernetes.io/projected/ee39782f-381e-4ec7-a3ce-308afbb7868d-kube-api-access-p2ntr\") pod 
\"goldmane-7988f88666-krsv8\" (UID: \"ee39782f-381e-4ec7-a3ce-308afbb7868d\") " pod="calico-system/goldmane-7988f88666-krsv8" Sep 5 06:02:41.665071 kubelet[2667]: I0905 06:02:41.664964 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dj5k\" (UniqueName: \"kubernetes.io/projected/e019660a-77f2-4342-88a6-1b159fdd9044-kube-api-access-7dj5k\") pod \"calico-apiserver-55c67bb578-ft6sk\" (UID: \"e019660a-77f2-4342-88a6-1b159fdd9044\") " pod="calico-apiserver/calico-apiserver-55c67bb578-ft6sk" Sep 5 06:02:41.665071 kubelet[2667]: I0905 06:02:41.664986 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a07d03b6-b51b-4713-93e9-707285ad93b5-whisker-backend-key-pair\") pod \"whisker-d55d6948b-f7cfq\" (UID: \"a07d03b6-b51b-4713-93e9-707285ad93b5\") " pod="calico-system/whisker-d55d6948b-f7cfq" Sep 5 06:02:41.665071 kubelet[2667]: I0905 06:02:41.665002 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9btd\" (UniqueName: \"kubernetes.io/projected/a07d03b6-b51b-4713-93e9-707285ad93b5-kube-api-access-n9btd\") pod \"whisker-d55d6948b-f7cfq\" (UID: \"a07d03b6-b51b-4713-93e9-707285ad93b5\") " pod="calico-system/whisker-d55d6948b-f7cfq" Sep 5 06:02:41.665071 kubelet[2667]: I0905 06:02:41.665019 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee39782f-381e-4ec7-a3ce-308afbb7868d-config\") pod \"goldmane-7988f88666-krsv8\" (UID: \"ee39782f-381e-4ec7-a3ce-308afbb7868d\") " pod="calico-system/goldmane-7988f88666-krsv8" Sep 5 06:02:41.665071 kubelet[2667]: I0905 06:02:41.665034 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/eef48bcf-2bf3-4c04-9956-6120276d01a9-calico-apiserver-certs\") pod \"calico-apiserver-55c67bb578-k765k\" (UID: \"eef48bcf-2bf3-4c04-9956-6120276d01a9\") " pod="calico-apiserver/calico-apiserver-55c67bb578-k765k" Sep 5 06:02:41.665258 kubelet[2667]: I0905 06:02:41.665051 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4da8305a-e7a9-4d0c-8e11-a4992d816014-tigera-ca-bundle\") pod \"calico-kube-controllers-9c78c446b-p45vd\" (UID: \"4da8305a-e7a9-4d0c-8e11-a4992d816014\") " pod="calico-system/calico-kube-controllers-9c78c446b-p45vd" Sep 5 06:02:41.665258 kubelet[2667]: I0905 06:02:41.665068 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a07d03b6-b51b-4713-93e9-707285ad93b5-whisker-ca-bundle\") pod \"whisker-d55d6948b-f7cfq\" (UID: \"a07d03b6-b51b-4713-93e9-707285ad93b5\") " pod="calico-system/whisker-d55d6948b-f7cfq" Sep 5 06:02:41.665258 kubelet[2667]: I0905 06:02:41.665085 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee39782f-381e-4ec7-a3ce-308afbb7868d-goldmane-ca-bundle\") pod \"goldmane-7988f88666-krsv8\" (UID: \"ee39782f-381e-4ec7-a3ce-308afbb7868d\") " pod="calico-system/goldmane-7988f88666-krsv8" Sep 5 06:02:41.665258 kubelet[2667]: I0905 06:02:41.665121 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ee39782f-381e-4ec7-a3ce-308afbb7868d-goldmane-key-pair\") pod \"goldmane-7988f88666-krsv8\" (UID: \"ee39782f-381e-4ec7-a3ce-308afbb7868d\") " pod="calico-system/goldmane-7988f88666-krsv8" Sep 5 06:02:41.665258 kubelet[2667]: I0905 06:02:41.665135 2667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef244c2-37ee-4de1-9971-98c536e0997f-config-volume\") pod \"coredns-7c65d6cfc9-r2q67\" (UID: \"7ef244c2-37ee-4de1-9971-98c536e0997f\") " pod="kube-system/coredns-7c65d6cfc9-r2q67" Sep 5 06:02:41.665534 kubelet[2667]: I0905 06:02:41.665153 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8693afd1-cf6f-4d9b-a24f-ad4d05639b9c-config-volume\") pod \"coredns-7c65d6cfc9-bc62q\" (UID: \"8693afd1-cf6f-4d9b-a24f-ad4d05639b9c\") " pod="kube-system/coredns-7c65d6cfc9-bc62q" Sep 5 06:02:41.665534 kubelet[2667]: I0905 06:02:41.665172 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e019660a-77f2-4342-88a6-1b159fdd9044-calico-apiserver-certs\") pod \"calico-apiserver-55c67bb578-ft6sk\" (UID: \"e019660a-77f2-4342-88a6-1b159fdd9044\") " pod="calico-apiserver/calico-apiserver-55c67bb578-ft6sk" Sep 5 06:02:41.891044 systemd[1]: Created slice kubepods-besteffort-podcf96125f_d73a_4df4_8958_6a721d1e7275.slice - libcontainer container kubepods-besteffort-podcf96125f_d73a_4df4_8958_6a721d1e7275.slice. 
Sep 5 06:02:41.893704 containerd[1540]: time="2025-09-05T06:02:41.893675385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46zb7,Uid:cf96125f-d73a-4df4-8958-6a721d1e7275,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:41.910490 containerd[1540]: time="2025-09-05T06:02:41.910332974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2q67,Uid:7ef244c2-37ee-4de1-9971-98c536e0997f,Namespace:kube-system,Attempt:0,}" Sep 5 06:02:41.932391 containerd[1540]: time="2025-09-05T06:02:41.932218853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c78c446b-p45vd,Uid:4da8305a-e7a9-4d0c-8e11-a4992d816014,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:41.934455 containerd[1540]: time="2025-09-05T06:02:41.934417896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c67bb578-k765k,Uid:eef48bcf-2bf3-4c04-9956-6120276d01a9,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:02:41.939113 containerd[1540]: time="2025-09-05T06:02:41.939060265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c67bb578-ft6sk,Uid:e019660a-77f2-4342-88a6-1b159fdd9044,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:02:41.946803 containerd[1540]: time="2025-09-05T06:02:41.946766078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bc62q,Uid:8693afd1-cf6f-4d9b-a24f-ad4d05639b9c,Namespace:kube-system,Attempt:0,}" Sep 5 06:02:41.958116 containerd[1540]: time="2025-09-05T06:02:41.958071698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-krsv8,Uid:ee39782f-381e-4ec7-a3ce-308afbb7868d,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:41.958385 containerd[1540]: time="2025-09-05T06:02:41.958355098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d55d6948b-f7cfq,Uid:a07d03b6-b51b-4713-93e9-707285ad93b5,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:42.008927 containerd[1540]: 
time="2025-09-05T06:02:42.008879706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 06:02:42.031026 containerd[1540]: time="2025-09-05T06:02:42.030953942Z" level=error msg="Failed to destroy network for sandbox \"1ead6ca23c201ed71b395574495aca3d2b79e44c8bcd634284509123a58dd2e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.034329 containerd[1540]: time="2025-09-05T06:02:42.034205267Z" level=error msg="Failed to destroy network for sandbox \"44276505fd784d0922c4a11259e81d6507f65b81f8579c2914a763a8979f87c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.035977 containerd[1540]: time="2025-09-05T06:02:42.035929350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46zb7,Uid:cf96125f-d73a-4df4-8958-6a721d1e7275,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ead6ca23c201ed71b395574495aca3d2b79e44c8bcd634284509123a58dd2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.036679 containerd[1540]: time="2025-09-05T06:02:42.036637991Z" level=error msg="Failed to destroy network for sandbox \"d0cd7abf16acfa4e57d9dccd045ea43f2f6077fe388c40e6a65398ef691a6fa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.037225 containerd[1540]: time="2025-09-05T06:02:42.036920111Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-9c78c446b-p45vd,Uid:4da8305a-e7a9-4d0c-8e11-a4992d816014,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44276505fd784d0922c4a11259e81d6507f65b81f8579c2914a763a8979f87c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.038971 kubelet[2667]: E0905 06:02:42.038367 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44276505fd784d0922c4a11259e81d6507f65b81f8579c2914a763a8979f87c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.038971 kubelet[2667]: E0905 06:02:42.038451 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44276505fd784d0922c4a11259e81d6507f65b81f8579c2914a763a8979f87c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9c78c446b-p45vd" Sep 5 06:02:42.038971 kubelet[2667]: E0905 06:02:42.038846 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ead6ca23c201ed71b395574495aca3d2b79e44c8bcd634284509123a58dd2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.038971 kubelet[2667]: E0905 06:02:42.038900 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ead6ca23c201ed71b395574495aca3d2b79e44c8bcd634284509123a58dd2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46zb7" Sep 5 06:02:42.041599 containerd[1540]: time="2025-09-05T06:02:42.041560559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2q67,Uid:7ef244c2-37ee-4de1-9971-98c536e0997f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0cd7abf16acfa4e57d9dccd045ea43f2f6077fe388c40e6a65398ef691a6fa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.041807 kubelet[2667]: E0905 06:02:42.041735 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0cd7abf16acfa4e57d9dccd045ea43f2f6077fe388c40e6a65398ef691a6fa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.041917 kubelet[2667]: E0905 06:02:42.041823 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0cd7abf16acfa4e57d9dccd045ea43f2f6077fe388c40e6a65398ef691a6fa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r2q67" Sep 5 06:02:42.041979 kubelet[2667]: E0905 06:02:42.041958 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0cd7abf16acfa4e57d9dccd045ea43f2f6077fe388c40e6a65398ef691a6fa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r2q67" Sep 5 06:02:42.042038 kubelet[2667]: E0905 06:02:42.042013 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-r2q67_kube-system(7ef244c2-37ee-4de1-9971-98c536e0997f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-r2q67_kube-system(7ef244c2-37ee-4de1-9971-98c536e0997f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0cd7abf16acfa4e57d9dccd045ea43f2f6077fe388c40e6a65398ef691a6fa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r2q67" podUID="7ef244c2-37ee-4de1-9971-98c536e0997f" Sep 5 06:02:42.042253 kubelet[2667]: E0905 06:02:42.042226 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ead6ca23c201ed71b395574495aca3d2b79e44c8bcd634284509123a58dd2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46zb7" Sep 5 06:02:42.042468 kubelet[2667]: E0905 06:02:42.042440 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-46zb7_calico-system(cf96125f-d73a-4df4-8958-6a721d1e7275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-46zb7_calico-system(cf96125f-d73a-4df4-8958-6a721d1e7275)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ead6ca23c201ed71b395574495aca3d2b79e44c8bcd634284509123a58dd2e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46zb7" podUID="cf96125f-d73a-4df4-8958-6a721d1e7275" Sep 5 06:02:42.043511 kubelet[2667]: E0905 06:02:42.043472 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44276505fd784d0922c4a11259e81d6507f65b81f8579c2914a763a8979f87c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9c78c446b-p45vd" Sep 5 06:02:42.043583 kubelet[2667]: E0905 06:02:42.043538 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9c78c446b-p45vd_calico-system(4da8305a-e7a9-4d0c-8e11-a4992d816014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9c78c446b-p45vd_calico-system(4da8305a-e7a9-4d0c-8e11-a4992d816014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44276505fd784d0922c4a11259e81d6507f65b81f8579c2914a763a8979f87c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9c78c446b-p45vd" podUID="4da8305a-e7a9-4d0c-8e11-a4992d816014" Sep 5 06:02:42.069360 containerd[1540]: time="2025-09-05T06:02:42.069300284Z" level=error msg="Failed to destroy network for sandbox 
\"f0075b1df07de77b416c93ffb10550dd5862d10e47799d303db9ae952462cd2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.071499 containerd[1540]: time="2025-09-05T06:02:42.071323768Z" level=error msg="Failed to destroy network for sandbox \"79756a8497b8957163fe655024d9ef36acad1eff9c833043286b5c2a4b3b34a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.072032 containerd[1540]: time="2025-09-05T06:02:42.071982689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c67bb578-k765k,Uid:eef48bcf-2bf3-4c04-9956-6120276d01a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0075b1df07de77b416c93ffb10550dd5862d10e47799d303db9ae952462cd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.072752 kubelet[2667]: E0905 06:02:42.072350 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0075b1df07de77b416c93ffb10550dd5862d10e47799d303db9ae952462cd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.072752 kubelet[2667]: E0905 06:02:42.072410 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0075b1df07de77b416c93ffb10550dd5862d10e47799d303db9ae952462cd2b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55c67bb578-k765k" Sep 5 06:02:42.072752 kubelet[2667]: E0905 06:02:42.072437 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0075b1df07de77b416c93ffb10550dd5862d10e47799d303db9ae952462cd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55c67bb578-k765k" Sep 5 06:02:42.072922 kubelet[2667]: E0905 06:02:42.072477 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55c67bb578-k765k_calico-apiserver(eef48bcf-2bf3-4c04-9956-6120276d01a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55c67bb578-k765k_calico-apiserver(eef48bcf-2bf3-4c04-9956-6120276d01a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0075b1df07de77b416c93ffb10550dd5862d10e47799d303db9ae952462cd2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55c67bb578-k765k" podUID="eef48bcf-2bf3-4c04-9956-6120276d01a9" Sep 5 06:02:42.073359 containerd[1540]: time="2025-09-05T06:02:42.073309651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bc62q,Uid:8693afd1-cf6f-4d9b-a24f-ad4d05639b9c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79756a8497b8957163fe655024d9ef36acad1eff9c833043286b5c2a4b3b34a9\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.073502 kubelet[2667]: E0905 06:02:42.073471 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79756a8497b8957163fe655024d9ef36acad1eff9c833043286b5c2a4b3b34a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.074340 kubelet[2667]: E0905 06:02:42.073514 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79756a8497b8957163fe655024d9ef36acad1eff9c833043286b5c2a4b3b34a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bc62q" Sep 5 06:02:42.074443 kubelet[2667]: E0905 06:02:42.074343 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79756a8497b8957163fe655024d9ef36acad1eff9c833043286b5c2a4b3b34a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bc62q" Sep 5 06:02:42.074499 kubelet[2667]: E0905 06:02:42.074436 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bc62q_kube-system(8693afd1-cf6f-4d9b-a24f-ad4d05639b9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bc62q_kube-system(8693afd1-cf6f-4d9b-a24f-ad4d05639b9c)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"79756a8497b8957163fe655024d9ef36acad1eff9c833043286b5c2a4b3b34a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bc62q" podUID="8693afd1-cf6f-4d9b-a24f-ad4d05639b9c" Sep 5 06:02:42.075975 containerd[1540]: time="2025-09-05T06:02:42.075943975Z" level=error msg="Failed to destroy network for sandbox \"df1aea92675ed311aa8f433b1f8f5c54fcb42dd367b71d6626030ce1e33b81bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.077358 containerd[1540]: time="2025-09-05T06:02:42.077317097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-krsv8,Uid:ee39782f-381e-4ec7-a3ce-308afbb7868d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1aea92675ed311aa8f433b1f8f5c54fcb42dd367b71d6626030ce1e33b81bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.077911 kubelet[2667]: E0905 06:02:42.077875 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1aea92675ed311aa8f433b1f8f5c54fcb42dd367b71d6626030ce1e33b81bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.078022 kubelet[2667]: E0905 06:02:42.078002 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"df1aea92675ed311aa8f433b1f8f5c54fcb42dd367b71d6626030ce1e33b81bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-krsv8" Sep 5 06:02:42.078085 kubelet[2667]: E0905 06:02:42.078072 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1aea92675ed311aa8f433b1f8f5c54fcb42dd367b71d6626030ce1e33b81bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-krsv8" Sep 5 06:02:42.078187 kubelet[2667]: E0905 06:02:42.078156 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-krsv8_calico-system(ee39782f-381e-4ec7-a3ce-308afbb7868d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-krsv8_calico-system(ee39782f-381e-4ec7-a3ce-308afbb7868d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df1aea92675ed311aa8f433b1f8f5c54fcb42dd367b71d6626030ce1e33b81bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-krsv8" podUID="ee39782f-381e-4ec7-a3ce-308afbb7868d" Sep 5 06:02:42.079414 containerd[1540]: time="2025-09-05T06:02:42.079310381Z" level=error msg="Failed to destroy network for sandbox \"091f231316d874736fadeef1e8b563746423a54a8e3f8bc6adbaed7867e2bf4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 
06:02:42.080274 containerd[1540]: time="2025-09-05T06:02:42.080240302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c67bb578-ft6sk,Uid:e019660a-77f2-4342-88a6-1b159fdd9044,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"091f231316d874736fadeef1e8b563746423a54a8e3f8bc6adbaed7867e2bf4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.080538 kubelet[2667]: E0905 06:02:42.080491 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091f231316d874736fadeef1e8b563746423a54a8e3f8bc6adbaed7867e2bf4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.080661 kubelet[2667]: E0905 06:02:42.080622 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091f231316d874736fadeef1e8b563746423a54a8e3f8bc6adbaed7867e2bf4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55c67bb578-ft6sk" Sep 5 06:02:42.080723 kubelet[2667]: E0905 06:02:42.080706 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091f231316d874736fadeef1e8b563746423a54a8e3f8bc6adbaed7867e2bf4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-55c67bb578-ft6sk" Sep 5 06:02:42.080823 kubelet[2667]: E0905 06:02:42.080799 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55c67bb578-ft6sk_calico-apiserver(e019660a-77f2-4342-88a6-1b159fdd9044)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55c67bb578-ft6sk_calico-apiserver(e019660a-77f2-4342-88a6-1b159fdd9044)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"091f231316d874736fadeef1e8b563746423a54a8e3f8bc6adbaed7867e2bf4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55c67bb578-ft6sk" podUID="e019660a-77f2-4342-88a6-1b159fdd9044" Sep 5 06:02:42.088034 containerd[1540]: time="2025-09-05T06:02:42.087941915Z" level=error msg="Failed to destroy network for sandbox \"b9eeb68d29aae5b943e670e2e18cb5b2001fd10249654e495778e1446c3d0c84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.089565 containerd[1540]: time="2025-09-05T06:02:42.089533677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d55d6948b-f7cfq,Uid:a07d03b6-b51b-4713-93e9-707285ad93b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9eeb68d29aae5b943e670e2e18cb5b2001fd10249654e495778e1446c3d0c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.090064 kubelet[2667]: E0905 06:02:42.089784 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"b9eeb68d29aae5b943e670e2e18cb5b2001fd10249654e495778e1446c3d0c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:02:42.090064 kubelet[2667]: E0905 06:02:42.089829 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9eeb68d29aae5b943e670e2e18cb5b2001fd10249654e495778e1446c3d0c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d55d6948b-f7cfq" Sep 5 06:02:42.090064 kubelet[2667]: E0905 06:02:42.089846 2667 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9eeb68d29aae5b943e670e2e18cb5b2001fd10249654e495778e1446c3d0c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d55d6948b-f7cfq" Sep 5 06:02:42.090173 kubelet[2667]: E0905 06:02:42.089895 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d55d6948b-f7cfq_calico-system(a07d03b6-b51b-4713-93e9-707285ad93b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d55d6948b-f7cfq_calico-system(a07d03b6-b51b-4713-93e9-707285ad93b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9eeb68d29aae5b943e670e2e18cb5b2001fd10249654e495778e1446c3d0c84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-d55d6948b-f7cfq" podUID="a07d03b6-b51b-4713-93e9-707285ad93b5" Sep 5 06:02:42.461539 kubelet[2667]: I0905 06:02:42.461489 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:02:42.920892 systemd[1]: run-netns-cni\x2d6ce10505\x2d6dd8\x2d6b5d\x2db5e1\x2d9a465bbcceae.mount: Deactivated successfully. Sep 5 06:02:42.920984 systemd[1]: run-netns-cni\x2dc59f8ba8\x2d82e5\x2d0b63\x2da7c4\x2d690efae15f73.mount: Deactivated successfully. Sep 5 06:02:42.921030 systemd[1]: run-netns-cni\x2d86352ab1\x2d6af7\x2d79db\x2d3f1c\x2d4953fb5ed99c.mount: Deactivated successfully. Sep 5 06:02:42.921072 systemd[1]: run-netns-cni\x2dfadf9fa5\x2d4058\x2dc239\x2dd7fa\x2d0cb9d55d79bf.mount: Deactivated successfully. Sep 5 06:02:42.921115 systemd[1]: run-netns-cni\x2dbcbc057f\x2d04ae\x2d3e5b\x2dad26\x2d4ed837248745.mount: Deactivated successfully. Sep 5 06:02:42.921155 systemd[1]: run-netns-cni\x2d5217ba41\x2d90cc\x2d3f2d\x2dc69b\x2d580da5a8e72a.mount: Deactivated successfully. Sep 5 06:02:44.882436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406770766.mount: Deactivated successfully. 
Sep 5 06:02:45.066677 containerd[1540]: time="2025-09-05T06:02:45.058510097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 06:02:45.066677 containerd[1540]: time="2025-09-05T06:02:45.061471421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.052531195s" Sep 5 06:02:45.067039 containerd[1540]: time="2025-09-05T06:02:45.066702748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 06:02:45.067039 containerd[1540]: time="2025-09-05T06:02:45.065654667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:45.067438 containerd[1540]: time="2025-09-05T06:02:45.067407269Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:45.067963 containerd[1540]: time="2025-09-05T06:02:45.067943270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:45.076121 containerd[1540]: time="2025-09-05T06:02:45.076095441Z" level=info msg="CreateContainer within sandbox \"20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 06:02:45.083814 containerd[1540]: time="2025-09-05T06:02:45.083783211Z" level=info msg="Container 
983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:45.090654 containerd[1540]: time="2025-09-05T06:02:45.090625100Z" level=info msg="CreateContainer within sandbox \"20166fb9ab5ba415cc19056c617607da6478750bcce68ef9122b765f42d9198f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487\"" Sep 5 06:02:45.091275 containerd[1540]: time="2025-09-05T06:02:45.091248421Z" level=info msg="StartContainer for \"983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487\"" Sep 5 06:02:45.092676 containerd[1540]: time="2025-09-05T06:02:45.092652463Z" level=info msg="connecting to shim 983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487" address="unix:///run/containerd/s/3c8a351ef5ea2a1a1669b21c10480c0820bb9a51a702dfa7144bfe892274b594" protocol=ttrpc version=3 Sep 5 06:02:45.115336 systemd[1]: Started cri-containerd-983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487.scope - libcontainer container 983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487. Sep 5 06:02:45.155181 containerd[1540]: time="2025-09-05T06:02:45.155082707Z" level=info msg="StartContainer for \"983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487\" returns successfully" Sep 5 06:02:45.283470 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 06:02:45.283579 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 5 06:02:45.496621 kubelet[2667]: I0905 06:02:45.496586 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a07d03b6-b51b-4713-93e9-707285ad93b5-whisker-ca-bundle\") pod \"a07d03b6-b51b-4713-93e9-707285ad93b5\" (UID: \"a07d03b6-b51b-4713-93e9-707285ad93b5\") " Sep 5 06:02:45.496621 kubelet[2667]: I0905 06:02:45.496630 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9btd\" (UniqueName: \"kubernetes.io/projected/a07d03b6-b51b-4713-93e9-707285ad93b5-kube-api-access-n9btd\") pod \"a07d03b6-b51b-4713-93e9-707285ad93b5\" (UID: \"a07d03b6-b51b-4713-93e9-707285ad93b5\") " Sep 5 06:02:45.497899 kubelet[2667]: I0905 06:02:45.496653 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a07d03b6-b51b-4713-93e9-707285ad93b5-whisker-backend-key-pair\") pod \"a07d03b6-b51b-4713-93e9-707285ad93b5\" (UID: \"a07d03b6-b51b-4713-93e9-707285ad93b5\") " Sep 5 06:02:45.497899 kubelet[2667]: I0905 06:02:45.497568 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07d03b6-b51b-4713-93e9-707285ad93b5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a07d03b6-b51b-4713-93e9-707285ad93b5" (UID: "a07d03b6-b51b-4713-93e9-707285ad93b5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 06:02:45.500633 kubelet[2667]: I0905 06:02:45.500601 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07d03b6-b51b-4713-93e9-707285ad93b5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a07d03b6-b51b-4713-93e9-707285ad93b5" (UID: "a07d03b6-b51b-4713-93e9-707285ad93b5"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 06:02:45.500855 kubelet[2667]: I0905 06:02:45.500826 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07d03b6-b51b-4713-93e9-707285ad93b5-kube-api-access-n9btd" (OuterVolumeSpecName: "kube-api-access-n9btd") pod "a07d03b6-b51b-4713-93e9-707285ad93b5" (UID: "a07d03b6-b51b-4713-93e9-707285ad93b5"). InnerVolumeSpecName "kube-api-access-n9btd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 06:02:45.597741 kubelet[2667]: I0905 06:02:45.597692 2667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9btd\" (UniqueName: \"kubernetes.io/projected/a07d03b6-b51b-4713-93e9-707285ad93b5-kube-api-access-n9btd\") on node \"localhost\" DevicePath \"\"" Sep 5 06:02:45.597741 kubelet[2667]: I0905 06:02:45.597723 2667 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a07d03b6-b51b-4713-93e9-707285ad93b5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 06:02:45.597741 kubelet[2667]: I0905 06:02:45.597733 2667 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a07d03b6-b51b-4713-93e9-707285ad93b5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 06:02:45.882510 systemd[1]: var-lib-kubelet-pods-a07d03b6\x2db51b\x2d4713\x2d93e9\x2d707285ad93b5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn9btd.mount: Deactivated successfully. Sep 5 06:02:45.882610 systemd[1]: var-lib-kubelet-pods-a07d03b6\x2db51b\x2d4713\x2d93e9\x2d707285ad93b5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 06:02:45.897276 systemd[1]: Removed slice kubepods-besteffort-poda07d03b6_b51b_4713_93e9_707285ad93b5.slice - libcontainer container kubepods-besteffort-poda07d03b6_b51b_4713_93e9_707285ad93b5.slice. 
Sep 5 06:02:46.033220 kubelet[2667]: I0905 06:02:46.032879 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sljdn" podStartSLOduration=2.713817728 podStartE2EDuration="12.032863248s" podCreationTimestamp="2025-09-05 06:02:34 +0000 UTC" firstStartedPulling="2025-09-05 06:02:35.74909339 +0000 UTC m=+21.950097187" lastFinishedPulling="2025-09-05 06:02:45.06813891 +0000 UTC m=+31.269142707" observedRunningTime="2025-09-05 06:02:46.031649126 +0000 UTC m=+32.232652923" watchObservedRunningTime="2025-09-05 06:02:46.032863248 +0000 UTC m=+32.233867045" Sep 5 06:02:46.084019 systemd[1]: Created slice kubepods-besteffort-pod2d0695d0_0517_4335_b37c_72afab4dc780.slice - libcontainer container kubepods-besteffort-pod2d0695d0_0517_4335_b37c_72afab4dc780.slice. Sep 5 06:02:46.100604 kubelet[2667]: I0905 06:02:46.100563 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnd49\" (UniqueName: \"kubernetes.io/projected/2d0695d0-0517-4335-b37c-72afab4dc780-kube-api-access-dnd49\") pod \"whisker-756c6d5ddf-q4w88\" (UID: \"2d0695d0-0517-4335-b37c-72afab4dc780\") " pod="calico-system/whisker-756c6d5ddf-q4w88" Sep 5 06:02:46.100731 kubelet[2667]: I0905 06:02:46.100629 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2d0695d0-0517-4335-b37c-72afab4dc780-whisker-backend-key-pair\") pod \"whisker-756c6d5ddf-q4w88\" (UID: \"2d0695d0-0517-4335-b37c-72afab4dc780\") " pod="calico-system/whisker-756c6d5ddf-q4w88" Sep 5 06:02:46.100731 kubelet[2667]: I0905 06:02:46.100648 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d0695d0-0517-4335-b37c-72afab4dc780-whisker-ca-bundle\") pod \"whisker-756c6d5ddf-q4w88\" (UID: 
\"2d0695d0-0517-4335-b37c-72afab4dc780\") " pod="calico-system/whisker-756c6d5ddf-q4w88" Sep 5 06:02:46.388122 containerd[1540]: time="2025-09-05T06:02:46.388071697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-756c6d5ddf-q4w88,Uid:2d0695d0-0517-4335-b37c-72afab4dc780,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:46.569933 systemd-networkd[1436]: cali13ef5ce88f9: Link UP Sep 5 06:02:46.570154 systemd-networkd[1436]: cali13ef5ce88f9: Gained carrier Sep 5 06:02:46.590370 containerd[1540]: 2025-09-05 06:02:46.426 [INFO][3791] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:02:46.590370 containerd[1540]: 2025-09-05 06:02:46.456 [INFO][3791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--756c6d5ddf--q4w88-eth0 whisker-756c6d5ddf- calico-system 2d0695d0-0517-4335-b37c-72afab4dc780 851 0 2025-09-05 06:02:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:756c6d5ddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-756c6d5ddf-q4w88 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali13ef5ce88f9 [] [] }} ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Namespace="calico-system" Pod="whisker-756c6d5ddf-q4w88" WorkloadEndpoint="localhost-k8s-whisker--756c6d5ddf--q4w88-" Sep 5 06:02:46.590370 containerd[1540]: 2025-09-05 06:02:46.456 [INFO][3791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Namespace="calico-system" Pod="whisker-756c6d5ddf-q4w88" WorkloadEndpoint="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" Sep 5 06:02:46.590370 containerd[1540]: 2025-09-05 06:02:46.517 [INFO][3808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" HandleID="k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Workload="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.517 [INFO][3808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" HandleID="k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Workload="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-756c6d5ddf-q4w88", "timestamp":"2025-09-05 06:02:46.51769654 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.517 [INFO][3808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.517 [INFO][3808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.518 [INFO][3808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.528 [INFO][3808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" host="localhost" Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.533 [INFO][3808] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.537 [INFO][3808] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.539 [INFO][3808] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.541 [INFO][3808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:46.590590 containerd[1540]: 2025-09-05 06:02:46.541 [INFO][3808] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" host="localhost" Sep 5 06:02:46.590779 containerd[1540]: 2025-09-05 06:02:46.543 [INFO][3808] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa Sep 5 06:02:46.590779 containerd[1540]: 2025-09-05 06:02:46.546 [INFO][3808] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" host="localhost" Sep 5 06:02:46.590779 containerd[1540]: 2025-09-05 06:02:46.553 [INFO][3808] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" host="localhost" Sep 5 06:02:46.590779 containerd[1540]: 2025-09-05 06:02:46.553 [INFO][3808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" host="localhost" Sep 5 06:02:46.590779 containerd[1540]: 2025-09-05 06:02:46.553 [INFO][3808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:02:46.590779 containerd[1540]: 2025-09-05 06:02:46.553 [INFO][3808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" HandleID="k8s-pod-network.1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Workload="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" Sep 5 06:02:46.590884 containerd[1540]: 2025-09-05 06:02:46.558 [INFO][3791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Namespace="calico-system" Pod="whisker-756c6d5ddf-q4w88" WorkloadEndpoint="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--756c6d5ddf--q4w88-eth0", GenerateName:"whisker-756c6d5ddf-", Namespace:"calico-system", SelfLink:"", UID:"2d0695d0-0517-4335-b37c-72afab4dc780", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"756c6d5ddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-756c6d5ddf-q4w88", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali13ef5ce88f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:46.590884 containerd[1540]: 2025-09-05 06:02:46.558 [INFO][3791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Namespace="calico-system" Pod="whisker-756c6d5ddf-q4w88" WorkloadEndpoint="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" Sep 5 06:02:46.590949 containerd[1540]: 2025-09-05 06:02:46.558 [INFO][3791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13ef5ce88f9 ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Namespace="calico-system" Pod="whisker-756c6d5ddf-q4w88" WorkloadEndpoint="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" Sep 5 06:02:46.590949 containerd[1540]: 2025-09-05 06:02:46.571 [INFO][3791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Namespace="calico-system" Pod="whisker-756c6d5ddf-q4w88" WorkloadEndpoint="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" Sep 5 06:02:46.590987 containerd[1540]: 2025-09-05 06:02:46.571 [INFO][3791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Namespace="calico-system" Pod="whisker-756c6d5ddf-q4w88" 
WorkloadEndpoint="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--756c6d5ddf--q4w88-eth0", GenerateName:"whisker-756c6d5ddf-", Namespace:"calico-system", SelfLink:"", UID:"2d0695d0-0517-4335-b37c-72afab4dc780", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"756c6d5ddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa", Pod:"whisker-756c6d5ddf-q4w88", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali13ef5ce88f9", MAC:"e6:47:d2:f5:41:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:46.591032 containerd[1540]: 2025-09-05 06:02:46.583 [INFO][3791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" Namespace="calico-system" Pod="whisker-756c6d5ddf-q4w88" WorkloadEndpoint="localhost-k8s-whisker--756c6d5ddf--q4w88-eth0" Sep 5 06:02:46.666900 containerd[1540]: time="2025-09-05T06:02:46.666801569Z" level=info msg="connecting to shim 
1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa" address="unix:///run/containerd/s/7425d6f583cfcabc471f40c9ca46d8a2954a97c78c31b128bc42323a05187ba1" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:46.711400 systemd[1]: Started cri-containerd-1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa.scope - libcontainer container 1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa. Sep 5 06:02:46.746960 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:02:46.770590 containerd[1540]: time="2025-09-05T06:02:46.770541620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-756c6d5ddf-q4w88,Uid:2d0695d0-0517-4335-b37c-72afab4dc780,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa\"" Sep 5 06:02:46.773918 containerd[1540]: time="2025-09-05T06:02:46.773884144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 06:02:46.966798 systemd-networkd[1436]: vxlan.calico: Link UP Sep 5 06:02:46.966804 systemd-networkd[1436]: vxlan.calico: Gained carrier Sep 5 06:02:47.021045 kubelet[2667]: I0905 06:02:47.021011 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:02:47.625669 containerd[1540]: time="2025-09-05T06:02:47.625620251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:47.626979 containerd[1540]: time="2025-09-05T06:02:47.626828772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 06:02:47.627757 containerd[1540]: time="2025-09-05T06:02:47.627729773Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 
06:02:47.629601 containerd[1540]: time="2025-09-05T06:02:47.629569855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:47.630344 containerd[1540]: time="2025-09-05T06:02:47.630319256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 856.397072ms" Sep 5 06:02:47.630594 containerd[1540]: time="2025-09-05T06:02:47.630418456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 06:02:47.632515 containerd[1540]: time="2025-09-05T06:02:47.632494099Z" level=info msg="CreateContainer within sandbox \"1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 06:02:47.640611 containerd[1540]: time="2025-09-05T06:02:47.640066988Z" level=info msg="Container 2fe9883417a2758ef66e6770238c532bf5380fe5e9b97bf1793bf030fb3157cf: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:47.646187 containerd[1540]: time="2025-09-05T06:02:47.646144675Z" level=info msg="CreateContainer within sandbox \"1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2fe9883417a2758ef66e6770238c532bf5380fe5e9b97bf1793bf030fb3157cf\"" Sep 5 06:02:47.646754 containerd[1540]: time="2025-09-05T06:02:47.646643116Z" level=info msg="StartContainer for \"2fe9883417a2758ef66e6770238c532bf5380fe5e9b97bf1793bf030fb3157cf\"" Sep 5 06:02:47.647601 containerd[1540]: 
time="2025-09-05T06:02:47.647545317Z" level=info msg="connecting to shim 2fe9883417a2758ef66e6770238c532bf5380fe5e9b97bf1793bf030fb3157cf" address="unix:///run/containerd/s/7425d6f583cfcabc471f40c9ca46d8a2954a97c78c31b128bc42323a05187ba1" protocol=ttrpc version=3 Sep 5 06:02:47.669345 systemd[1]: Started cri-containerd-2fe9883417a2758ef66e6770238c532bf5380fe5e9b97bf1793bf030fb3157cf.scope - libcontainer container 2fe9883417a2758ef66e6770238c532bf5380fe5e9b97bf1793bf030fb3157cf. Sep 5 06:02:47.714756 containerd[1540]: time="2025-09-05T06:02:47.714723396Z" level=info msg="StartContainer for \"2fe9883417a2758ef66e6770238c532bf5380fe5e9b97bf1793bf030fb3157cf\" returns successfully" Sep 5 06:02:47.717217 containerd[1540]: time="2025-09-05T06:02:47.717104039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 06:02:47.889474 kubelet[2667]: I0905 06:02:47.889268 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a07d03b6-b51b-4713-93e9-707285ad93b5" path="/var/lib/kubelet/pods/a07d03b6-b51b-4713-93e9-707285ad93b5/volumes" Sep 5 06:02:48.023223 kubelet[2667]: I0905 06:02:48.023110 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:02:48.049721 containerd[1540]: time="2025-09-05T06:02:48.049673629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487\" id:\"1e24efcfa5638124974e25de098db06fc4c0aefa1bfc6ff6947a70c33afa17be\" pid:4121 exit_status:1 exited_at:{seconds:1757052168 nanos:49377189}" Sep 5 06:02:48.128053 containerd[1540]: time="2025-09-05T06:02:48.128013036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487\" id:\"07e1e37bd495e9fc3a5c0c0c1a1d571bc4033de8185581d8493b1d037a2bca79\" pid:4147 exit_status:1 exited_at:{seconds:1757052168 nanos:127719636}" Sep 5 06:02:48.368417 systemd-networkd[1436]: vxlan.calico: 
Gained IPv6LL Sep 5 06:02:48.497662 systemd-networkd[1436]: cali13ef5ce88f9: Gained IPv6LL Sep 5 06:02:48.953958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924223164.mount: Deactivated successfully. Sep 5 06:02:48.996039 containerd[1540]: time="2025-09-05T06:02:48.995997400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:48.996879 containerd[1540]: time="2025-09-05T06:02:48.996726441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 06:02:48.997597 containerd[1540]: time="2025-09-05T06:02:48.997566042Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:48.999573 containerd[1540]: time="2025-09-05T06:02:48.999538764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:49.000213 containerd[1540]: time="2025-09-05T06:02:49.000177045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.283040646s" Sep 5 06:02:49.000267 containerd[1540]: time="2025-09-05T06:02:49.000216605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 06:02:49.003270 containerd[1540]: time="2025-09-05T06:02:49.003241568Z" 
level=info msg="CreateContainer within sandbox \"1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 06:02:49.011254 containerd[1540]: time="2025-09-05T06:02:49.010483576Z" level=info msg="Container ec42b67d2009474c9855e69dc4a31cde58d70fe476cc3feb1b11195f35b4feba: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:49.017528 containerd[1540]: time="2025-09-05T06:02:49.017498063Z" level=info msg="CreateContainer within sandbox \"1d989d16959e1f019a85fa945be2630548d4da7d66354998c530e2d4580edcfa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ec42b67d2009474c9855e69dc4a31cde58d70fe476cc3feb1b11195f35b4feba\"" Sep 5 06:02:49.018227 containerd[1540]: time="2025-09-05T06:02:49.018165544Z" level=info msg="StartContainer for \"ec42b67d2009474c9855e69dc4a31cde58d70fe476cc3feb1b11195f35b4feba\"" Sep 5 06:02:49.029652 containerd[1540]: time="2025-09-05T06:02:49.029623916Z" level=info msg="connecting to shim ec42b67d2009474c9855e69dc4a31cde58d70fe476cc3feb1b11195f35b4feba" address="unix:///run/containerd/s/7425d6f583cfcabc471f40c9ca46d8a2954a97c78c31b128bc42323a05187ba1" protocol=ttrpc version=3 Sep 5 06:02:49.052347 systemd[1]: Started cri-containerd-ec42b67d2009474c9855e69dc4a31cde58d70fe476cc3feb1b11195f35b4feba.scope - libcontainer container ec42b67d2009474c9855e69dc4a31cde58d70fe476cc3feb1b11195f35b4feba. 
Sep 5 06:02:49.084322 containerd[1540]: time="2025-09-05T06:02:49.084289053Z" level=info msg="StartContainer for \"ec42b67d2009474c9855e69dc4a31cde58d70fe476cc3feb1b11195f35b4feba\" returns successfully" Sep 5 06:02:52.887504 containerd[1540]: time="2025-09-05T06:02:52.887460178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46zb7,Uid:cf96125f-d73a-4df4-8958-6a721d1e7275,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:53.029824 systemd-networkd[1436]: calieecc3907a42: Link UP Sep 5 06:02:53.030266 systemd-networkd[1436]: calieecc3907a42: Gained carrier Sep 5 06:02:53.047284 containerd[1540]: 2025-09-05 06:02:52.930 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--46zb7-eth0 csi-node-driver- calico-system cf96125f-d73a-4df4-8958-6a721d1e7275 643 0 2025-09-05 06:02:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-46zb7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieecc3907a42 [] [] }} ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Namespace="calico-system" Pod="csi-node-driver-46zb7" WorkloadEndpoint="localhost-k8s-csi--node--driver--46zb7-" Sep 5 06:02:53.047284 containerd[1540]: 2025-09-05 06:02:52.931 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Namespace="calico-system" Pod="csi-node-driver-46zb7" WorkloadEndpoint="localhost-k8s-csi--node--driver--46zb7-eth0" Sep 5 06:02:53.047284 containerd[1540]: 2025-09-05 06:02:52.962 [INFO][4231] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" HandleID="k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Workload="localhost-k8s-csi--node--driver--46zb7-eth0" Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:52.962 [INFO][4231] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" HandleID="k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Workload="localhost-k8s-csi--node--driver--46zb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-46zb7", "timestamp":"2025-09-05 06:02:52.962381722 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:52.962 [INFO][4231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:52.962 [INFO][4231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:52.962 [INFO][4231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:52.978 [INFO][4231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" host="localhost" Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:52.991 [INFO][4231] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:53.000 [INFO][4231] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:53.004 [INFO][4231] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:53.007 [INFO][4231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:53.047491 containerd[1540]: 2025-09-05 06:02:53.007 [INFO][4231] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" host="localhost" Sep 5 06:02:53.047684 containerd[1540]: 2025-09-05 06:02:53.012 [INFO][4231] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3 Sep 5 06:02:53.047684 containerd[1540]: 2025-09-05 06:02:53.019 [INFO][4231] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" host="localhost" Sep 5 06:02:53.047684 containerd[1540]: 2025-09-05 06:02:53.025 [INFO][4231] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" host="localhost" Sep 5 06:02:53.047684 containerd[1540]: 2025-09-05 06:02:53.025 [INFO][4231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" host="localhost" Sep 5 06:02:53.047684 containerd[1540]: 2025-09-05 06:02:53.025 [INFO][4231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:02:53.047684 containerd[1540]: 2025-09-05 06:02:53.025 [INFO][4231] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" HandleID="k8s-pod-network.06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Workload="localhost-k8s-csi--node--driver--46zb7-eth0" Sep 5 06:02:53.047790 containerd[1540]: 2025-09-05 06:02:53.027 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Namespace="calico-system" Pod="csi-node-driver-46zb7" WorkloadEndpoint="localhost-k8s-csi--node--driver--46zb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--46zb7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cf96125f-d73a-4df4-8958-6a721d1e7275", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-46zb7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieecc3907a42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:53.047845 containerd[1540]: 2025-09-05 06:02:53.027 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Namespace="calico-system" Pod="csi-node-driver-46zb7" WorkloadEndpoint="localhost-k8s-csi--node--driver--46zb7-eth0" Sep 5 06:02:53.047845 containerd[1540]: 2025-09-05 06:02:53.028 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieecc3907a42 ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Namespace="calico-system" Pod="csi-node-driver-46zb7" WorkloadEndpoint="localhost-k8s-csi--node--driver--46zb7-eth0" Sep 5 06:02:53.047845 containerd[1540]: 2025-09-05 06:02:53.030 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Namespace="calico-system" Pod="csi-node-driver-46zb7" WorkloadEndpoint="localhost-k8s-csi--node--driver--46zb7-eth0" Sep 5 06:02:53.047948 containerd[1540]: 2025-09-05 06:02:53.031 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" 
Namespace="calico-system" Pod="csi-node-driver-46zb7" WorkloadEndpoint="localhost-k8s-csi--node--driver--46zb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--46zb7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cf96125f-d73a-4df4-8958-6a721d1e7275", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3", Pod:"csi-node-driver-46zb7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieecc3907a42", MAC:"72:f1:69:9a:b6:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:53.047997 containerd[1540]: 2025-09-05 06:02:53.043 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" Namespace="calico-system" Pod="csi-node-driver-46zb7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--46zb7-eth0" Sep 5 06:02:53.053229 kubelet[2667]: I0905 06:02:53.052466 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-756c6d5ddf-q4w88" podStartSLOduration=4.824606254 podStartE2EDuration="7.052450357s" podCreationTimestamp="2025-09-05 06:02:46 +0000 UTC" firstStartedPulling="2025-09-05 06:02:46.773090383 +0000 UTC m=+32.974094180" lastFinishedPulling="2025-09-05 06:02:49.000934486 +0000 UTC m=+35.201938283" observedRunningTime="2025-09-05 06:02:50.04532857 +0000 UTC m=+36.246332407" watchObservedRunningTime="2025-09-05 06:02:53.052450357 +0000 UTC m=+39.253454154" Sep 5 06:02:53.070784 containerd[1540]: time="2025-09-05T06:02:53.070692851Z" level=info msg="connecting to shim 06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3" address="unix:///run/containerd/s/576e7cd7d5d3307cb7d324ea9f8a6e6dfd7523aef308d0d3695f7b4b4aae2cb8" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:53.098430 systemd[1]: Started cri-containerd-06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3.scope - libcontainer container 06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3. Sep 5 06:02:53.109309 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:02:53.128122 containerd[1540]: time="2025-09-05T06:02:53.128006498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46zb7,Uid:cf96125f-d73a-4df4-8958-6a721d1e7275,Namespace:calico-system,Attempt:0,} returns sandbox id \"06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3\"" Sep 5 06:02:53.130457 containerd[1540]: time="2025-09-05T06:02:53.130404379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 06:02:53.421834 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:54946.service - OpenSSH per-connection server daemon (10.0.0.1:54946). 
Sep 5 06:02:53.488425 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 54946 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:02:53.490085 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:02:53.494270 systemd-logind[1516]: New session 8 of user core. Sep 5 06:02:53.501404 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 06:02:53.683591 sshd[4303]: Connection closed by 10.0.0.1 port 54946 Sep 5 06:02:53.683846 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Sep 5 06:02:53.688092 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:54946.service: Deactivated successfully. Sep 5 06:02:53.689882 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 06:02:53.692268 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. Sep 5 06:02:53.693628 systemd-logind[1516]: Removed session 8. Sep 5 06:02:53.888013 containerd[1540]: time="2025-09-05T06:02:53.887971549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c78c446b-p45vd,Uid:4da8305a-e7a9-4d0c-8e11-a4992d816014,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:53.888388 containerd[1540]: time="2025-09-05T06:02:53.887986709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bc62q,Uid:8693afd1-cf6f-4d9b-a24f-ad4d05639b9c,Namespace:kube-system,Attempt:0,}" Sep 5 06:02:54.029084 systemd-networkd[1436]: cali1a2dd534638: Link UP Sep 5 06:02:54.031567 systemd-networkd[1436]: cali1a2dd534638: Gained carrier Sep 5 06:02:54.050341 containerd[1540]: 2025-09-05 06:02:53.943 [INFO][4316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0 calico-kube-controllers-9c78c446b- calico-system 4da8305a-e7a9-4d0c-8e11-a4992d816014 786 0 2025-09-05 06:02:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:9c78c446b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9c78c446b-p45vd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1a2dd534638 [] [] }} ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Namespace="calico-system" Pod="calico-kube-controllers-9c78c446b-p45vd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-" Sep 5 06:02:54.050341 containerd[1540]: 2025-09-05 06:02:53.943 [INFO][4316] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Namespace="calico-system" Pod="calico-kube-controllers-9c78c446b-p45vd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" Sep 5 06:02:54.050341 containerd[1540]: 2025-09-05 06:02:53.979 [INFO][4350] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" HandleID="k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Workload="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:53.979 [INFO][4350] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" HandleID="k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Workload="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9c78c446b-p45vd", "timestamp":"2025-09-05 06:02:53.979453662 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:53.979 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:53.980 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:53.980 [INFO][4350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:53.990 [INFO][4350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" host="localhost" Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:53.995 [INFO][4350] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:54.001 [INFO][4350] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:54.003 [INFO][4350] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:54.006 [INFO][4350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:54.050540 containerd[1540]: 2025-09-05 06:02:54.006 [INFO][4350] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" host="localhost" Sep 5 06:02:54.050743 containerd[1540]: 2025-09-05 06:02:54.008 [INFO][4350] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e 
Sep 5 06:02:54.050743 containerd[1540]: 2025-09-05 06:02:54.013 [INFO][4350] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" host="localhost" Sep 5 06:02:54.050743 containerd[1540]: 2025-09-05 06:02:54.020 [INFO][4350] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" host="localhost" Sep 5 06:02:54.050743 containerd[1540]: 2025-09-05 06:02:54.020 [INFO][4350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" host="localhost" Sep 5 06:02:54.050743 containerd[1540]: 2025-09-05 06:02:54.020 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:02:54.050743 containerd[1540]: 2025-09-05 06:02:54.020 [INFO][4350] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" HandleID="k8s-pod-network.a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Workload="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" Sep 5 06:02:54.050850 containerd[1540]: 2025-09-05 06:02:54.024 [INFO][4316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Namespace="calico-system" Pod="calico-kube-controllers-9c78c446b-p45vd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0", GenerateName:"calico-kube-controllers-9c78c446b-", Namespace:"calico-system", SelfLink:"", 
UID:"4da8305a-e7a9-4d0c-8e11-a4992d816014", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9c78c446b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9c78c446b-p45vd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a2dd534638", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:54.050898 containerd[1540]: 2025-09-05 06:02:54.024 [INFO][4316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Namespace="calico-system" Pod="calico-kube-controllers-9c78c446b-p45vd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" Sep 5 06:02:54.050898 containerd[1540]: 2025-09-05 06:02:54.025 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a2dd534638 ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Namespace="calico-system" Pod="calico-kube-controllers-9c78c446b-p45vd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" Sep 5 
06:02:54.050898 containerd[1540]: 2025-09-05 06:02:54.029 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Namespace="calico-system" Pod="calico-kube-controllers-9c78c446b-p45vd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" Sep 5 06:02:54.050955 containerd[1540]: 2025-09-05 06:02:54.032 [INFO][4316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Namespace="calico-system" Pod="calico-kube-controllers-9c78c446b-p45vd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0", GenerateName:"calico-kube-controllers-9c78c446b-", Namespace:"calico-system", SelfLink:"", UID:"4da8305a-e7a9-4d0c-8e11-a4992d816014", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9c78c446b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e", Pod:"calico-kube-controllers-9c78c446b-p45vd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a2dd534638", MAC:"36:41:ec:35:12:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:54.051001 containerd[1540]: 2025-09-05 06:02:54.045 [INFO][4316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" Namespace="calico-system" Pod="calico-kube-controllers-9c78c446b-p45vd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c78c446b--p45vd-eth0" Sep 5 06:02:54.077180 containerd[1540]: time="2025-09-05T06:02:54.077116897Z" level=info msg="connecting to shim a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e" address="unix:///run/containerd/s/f4600b4edc0325bd5cb37a1e8c48d916f754ad8fa55091f905b7255efc6e5e26" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:54.085612 containerd[1540]: time="2025-09-05T06:02:54.085557983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:54.091144 containerd[1540]: time="2025-09-05T06:02:54.091089707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 06:02:54.092995 containerd[1540]: time="2025-09-05T06:02:54.092951149Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:54.099787 containerd[1540]: time="2025-09-05T06:02:54.099719554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 
06:02:54.100677 containerd[1540]: time="2025-09-05T06:02:54.100477315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 970.019696ms" Sep 5 06:02:54.100677 containerd[1540]: time="2025-09-05T06:02:54.100508075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 06:02:54.109057 containerd[1540]: time="2025-09-05T06:02:54.109017641Z" level=info msg="CreateContainer within sandbox \"06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 06:02:54.136156 containerd[1540]: time="2025-09-05T06:02:54.135920181Z" level=info msg="Container ce385a0741463e814b44d320d35bcb0c680dc494e59cb2b3cfae16e196c226fb: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:54.140325 systemd-networkd[1436]: cali40f927f46f4: Link UP Sep 5 06:02:54.141317 systemd-networkd[1436]: cali40f927f46f4: Gained carrier Sep 5 06:02:54.159247 containerd[1540]: 2025-09-05 06:02:53.958 [INFO][4328] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0 coredns-7c65d6cfc9- kube-system 8693afd1-cf6f-4d9b-a24f-ad4d05639b9c 782 0 2025-09-05 06:02:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-bc62q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40f927f46f4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bc62q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bc62q-" Sep 5 06:02:54.159247 containerd[1540]: 2025-09-05 06:02:53.958 [INFO][4328] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bc62q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" Sep 5 06:02:54.159247 containerd[1540]: 2025-09-05 06:02:54.003 [INFO][4356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" HandleID="k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Workload="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.003 [INFO][4356] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" HandleID="k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Workload="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3d40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-bc62q", "timestamp":"2025-09-05 06:02:54.003314921 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.003 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.021 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.021 [INFO][4356] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.092 [INFO][4356] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" host="localhost" Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.100 [INFO][4356] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.108 [INFO][4356] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.110 [INFO][4356] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.112 [INFO][4356] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:54.159446 containerd[1540]: 2025-09-05 06:02:54.112 [INFO][4356] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" host="localhost" Sep 5 06:02:54.159634 containerd[1540]: 2025-09-05 06:02:54.114 [INFO][4356] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429 Sep 5 06:02:54.159634 containerd[1540]: 2025-09-05 06:02:54.118 [INFO][4356] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" host="localhost" Sep 5 06:02:54.159634 containerd[1540]: 2025-09-05 06:02:54.127 [INFO][4356] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" host="localhost" Sep 5 06:02:54.159634 containerd[1540]: 2025-09-05 06:02:54.127 [INFO][4356] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" host="localhost" Sep 5 06:02:54.159634 containerd[1540]: 2025-09-05 06:02:54.127 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:02:54.159634 containerd[1540]: 2025-09-05 06:02:54.128 [INFO][4356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" HandleID="k8s-pod-network.25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Workload="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" Sep 5 06:02:54.159739 containerd[1540]: 2025-09-05 06:02:54.134 [INFO][4328] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bc62q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8693afd1-cf6f-4d9b-a24f-ad4d05639b9c", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-bc62q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40f927f46f4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:54.159802 containerd[1540]: 2025-09-05 06:02:54.136 [INFO][4328] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bc62q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" Sep 5 06:02:54.159802 containerd[1540]: 2025-09-05 06:02:54.136 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40f927f46f4 ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bc62q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" Sep 5 06:02:54.159802 containerd[1540]: 2025-09-05 06:02:54.140 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bc62q" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" Sep 5 06:02:54.159855 containerd[1540]: 2025-09-05 06:02:54.141 [INFO][4328] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bc62q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8693afd1-cf6f-4d9b-a24f-ad4d05639b9c", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429", Pod:"coredns-7c65d6cfc9-bc62q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40f927f46f4", MAC:"d6:26:f1:b9:c2:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:54.159855 containerd[1540]: 2025-09-05 06:02:54.154 [INFO][4328] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bc62q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bc62q-eth0" Sep 5 06:02:54.169468 systemd[1]: Started cri-containerd-a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e.scope - libcontainer container a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e. Sep 5 06:02:54.187111 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:02:54.188551 containerd[1540]: time="2025-09-05T06:02:54.188508301Z" level=info msg="CreateContainer within sandbox \"06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ce385a0741463e814b44d320d35bcb0c680dc494e59cb2b3cfae16e196c226fb\"" Sep 5 06:02:54.190924 containerd[1540]: time="2025-09-05T06:02:54.189540102Z" level=info msg="StartContainer for \"ce385a0741463e814b44d320d35bcb0c680dc494e59cb2b3cfae16e196c226fb\"" Sep 5 06:02:54.194541 containerd[1540]: time="2025-09-05T06:02:54.194362945Z" level=info msg="connecting to shim ce385a0741463e814b44d320d35bcb0c680dc494e59cb2b3cfae16e196c226fb" address="unix:///run/containerd/s/576e7cd7d5d3307cb7d324ea9f8a6e6dfd7523aef308d0d3695f7b4b4aae2cb8" protocol=ttrpc version=3 Sep 5 06:02:54.200823 containerd[1540]: time="2025-09-05T06:02:54.200730950Z" level=info msg="connecting to shim 25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429" 
address="unix:///run/containerd/s/e3508e6a28f3e0fe5e96c67c9374885d43dc297ba53d38365e4388674d358c0c" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:54.222760 systemd[1]: Started cri-containerd-ce385a0741463e814b44d320d35bcb0c680dc494e59cb2b3cfae16e196c226fb.scope - libcontainer container ce385a0741463e814b44d320d35bcb0c680dc494e59cb2b3cfae16e196c226fb. Sep 5 06:02:54.226390 systemd[1]: Started cri-containerd-25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429.scope - libcontainer container 25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429. Sep 5 06:02:54.230001 containerd[1540]: time="2025-09-05T06:02:54.229906932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c78c446b-p45vd,Uid:4da8305a-e7a9-4d0c-8e11-a4992d816014,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e\"" Sep 5 06:02:54.233079 containerd[1540]: time="2025-09-05T06:02:54.233046454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 06:02:54.240189 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:02:54.267988 containerd[1540]: time="2025-09-05T06:02:54.267769281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bc62q,Uid:8693afd1-cf6f-4d9b-a24f-ad4d05639b9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429\"" Sep 5 06:02:54.271924 containerd[1540]: time="2025-09-05T06:02:54.271882204Z" level=info msg="CreateContainer within sandbox \"25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 06:02:54.278287 containerd[1540]: time="2025-09-05T06:02:54.278244649Z" level=info msg="StartContainer for \"ce385a0741463e814b44d320d35bcb0c680dc494e59cb2b3cfae16e196c226fb\" returns 
successfully" Sep 5 06:02:54.285517 containerd[1540]: time="2025-09-05T06:02:54.285286174Z" level=info msg="Container 645a5928050cd7eda0c0f8c79b3aba4fe64dc62f46d556b183a71c01bb814b7c: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:54.293566 containerd[1540]: time="2025-09-05T06:02:54.293520460Z" level=info msg="CreateContainer within sandbox \"25193950c6f9ea60d986bc13ef5739881535ab4055593cf50e273ee5e6a2b429\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"645a5928050cd7eda0c0f8c79b3aba4fe64dc62f46d556b183a71c01bb814b7c\"" Sep 5 06:02:54.294721 containerd[1540]: time="2025-09-05T06:02:54.294668621Z" level=info msg="StartContainer for \"645a5928050cd7eda0c0f8c79b3aba4fe64dc62f46d556b183a71c01bb814b7c\"" Sep 5 06:02:54.296066 containerd[1540]: time="2025-09-05T06:02:54.296025782Z" level=info msg="connecting to shim 645a5928050cd7eda0c0f8c79b3aba4fe64dc62f46d556b183a71c01bb814b7c" address="unix:///run/containerd/s/e3508e6a28f3e0fe5e96c67c9374885d43dc297ba53d38365e4388674d358c0c" protocol=ttrpc version=3 Sep 5 06:02:54.329512 systemd[1]: Started cri-containerd-645a5928050cd7eda0c0f8c79b3aba4fe64dc62f46d556b183a71c01bb814b7c.scope - libcontainer container 645a5928050cd7eda0c0f8c79b3aba4fe64dc62f46d556b183a71c01bb814b7c. 
Sep 5 06:02:54.355083 containerd[1540]: time="2025-09-05T06:02:54.354935226Z" level=info msg="StartContainer for \"645a5928050cd7eda0c0f8c79b3aba4fe64dc62f46d556b183a71c01bb814b7c\" returns successfully" Sep 5 06:02:54.886882 containerd[1540]: time="2025-09-05T06:02:54.886828867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c67bb578-ft6sk,Uid:e019660a-77f2-4342-88a6-1b159fdd9044,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:02:54.887404 containerd[1540]: time="2025-09-05T06:02:54.887284508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2q67,Uid:7ef244c2-37ee-4de1-9971-98c536e0997f,Namespace:kube-system,Attempt:0,}" Sep 5 06:02:54.898360 systemd-networkd[1436]: calieecc3907a42: Gained IPv6LL Sep 5 06:02:54.907007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785831047.mount: Deactivated successfully. Sep 5 06:02:55.012625 systemd-networkd[1436]: cali5b6148e27e9: Link UP Sep 5 06:02:55.012771 systemd-networkd[1436]: cali5b6148e27e9: Gained carrier Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.936 [INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0 coredns-7c65d6cfc9- kube-system 7ef244c2-37ee-4de1-9971-98c536e0997f 775 0 2025-09-05 06:02:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-r2q67 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5b6148e27e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2q67" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2q67-" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 
06:02:54.937 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2q67" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.968 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" HandleID="k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Workload="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.968 [INFO][4568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" HandleID="k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Workload="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-r2q67", "timestamp":"2025-09-05 06:02:54.968447889 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.968 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.968 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.968 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.979 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.984 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.989 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.991 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.993 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.993 [INFO][4568] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.995 [INFO][4568] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872 Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:54.998 [INFO][4568] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:55.006 [INFO][4568] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:55.006 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" host="localhost" Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:55.006 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:02:55.028777 containerd[1540]: 2025-09-05 06:02:55.006 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" HandleID="k8s-pod-network.5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Workload="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" Sep 5 06:02:55.029894 containerd[1540]: 2025-09-05 06:02:55.009 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2q67" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ef244c2-37ee-4de1-9971-98c536e0997f", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-r2q67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b6148e27e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:55.029894 containerd[1540]: 2025-09-05 06:02:55.009 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2q67" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" Sep 5 06:02:55.029894 containerd[1540]: 2025-09-05 06:02:55.009 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b6148e27e9 ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2q67" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" Sep 5 06:02:55.029894 containerd[1540]: 2025-09-05 06:02:55.011 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2q67" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" Sep 5 06:02:55.029894 containerd[1540]: 2025-09-05 06:02:55.014 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2q67" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ef244c2-37ee-4de1-9971-98c536e0997f", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872", Pod:"coredns-7c65d6cfc9-r2q67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b6148e27e9", MAC:"32:f7:fc:0c:e3:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:55.029894 containerd[1540]: 2025-09-05 06:02:55.024 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2q67" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2q67-eth0" Sep 5 06:02:55.054386 containerd[1540]: time="2025-09-05T06:02:55.054343671Z" level=info msg="connecting to shim 5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872" address="unix:///run/containerd/s/3fb309f40654a7c9005c7354e5d4becb2b099f879b878ff6aa726931eb10e876" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:55.079619 kubelet[2667]: I0905 06:02:55.079548 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bc62q" podStartSLOduration=35.079528249 podStartE2EDuration="35.079528249s" podCreationTimestamp="2025-09-05 06:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:02:55.079077609 +0000 UTC m=+41.280081446" watchObservedRunningTime="2025-09-05 06:02:55.079528249 +0000 UTC m=+41.280532006" Sep 5 06:02:55.114875 systemd[1]: Started cri-containerd-5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872.scope - libcontainer container 5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872. 
Sep 5 06:02:55.135728 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:02:55.142416 systemd-networkd[1436]: cali74bbde239d0: Link UP Sep 5 06:02:55.143125 systemd-networkd[1436]: cali74bbde239d0: Gained carrier Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:54.941 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0 calico-apiserver-55c67bb578- calico-apiserver e019660a-77f2-4342-88a6-1b159fdd9044 779 0 2025-09-05 06:02:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55c67bb578 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55c67bb578-ft6sk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali74bbde239d0 [] [] }} ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-ft6sk" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:54.941 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-ft6sk" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:54.971 [INFO][4574] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" HandleID="k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" 
Workload="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:54.971 [INFO][4574] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" HandleID="k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Workload="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a36f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55c67bb578-ft6sk", "timestamp":"2025-09-05 06:02:54.971319491 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:54.971 [INFO][4574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.006 [INFO][4574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.006 [INFO][4574] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.084 [INFO][4574] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.090 [INFO][4574] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.102 [INFO][4574] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.106 [INFO][4574] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.116 [INFO][4574] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.117 [INFO][4574] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.119 [INFO][4574] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052 Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.127 [INFO][4574] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.134 [INFO][4574] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.135 [INFO][4574] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" host="localhost" Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.135 [INFO][4574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:02:55.165807 containerd[1540]: 2025-09-05 06:02:55.135 [INFO][4574] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" HandleID="k8s-pod-network.eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Workload="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" Sep 5 06:02:55.171105 containerd[1540]: 2025-09-05 06:02:55.139 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-ft6sk" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0", GenerateName:"calico-apiserver-55c67bb578-", Namespace:"calico-apiserver", SelfLink:"", UID:"e019660a-77f2-4342-88a6-1b159fdd9044", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c67bb578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55c67bb578-ft6sk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74bbde239d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:55.171105 containerd[1540]: 2025-09-05 06:02:55.139 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-ft6sk" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" Sep 5 06:02:55.171105 containerd[1540]: 2025-09-05 06:02:55.139 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74bbde239d0 ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-ft6sk" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" Sep 5 06:02:55.171105 containerd[1540]: 2025-09-05 06:02:55.143 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-ft6sk" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" Sep 5 06:02:55.171105 containerd[1540]: 2025-09-05 06:02:55.145 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-ft6sk" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0", GenerateName:"calico-apiserver-55c67bb578-", Namespace:"calico-apiserver", SelfLink:"", UID:"e019660a-77f2-4342-88a6-1b159fdd9044", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c67bb578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052", Pod:"calico-apiserver-55c67bb578-ft6sk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74bbde239d0", MAC:"e2:4a:fa:10:88:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:55.171105 containerd[1540]: 2025-09-05 06:02:55.156 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-ft6sk" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--ft6sk-eth0" Sep 5 06:02:55.196730 containerd[1540]: time="2025-09-05T06:02:55.196646532Z" level=info msg="connecting to shim eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052" address="unix:///run/containerd/s/c3089ff5809303eff545e26fe22ab50a7b479439f661524afe9ef24692795267" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:55.204407 containerd[1540]: time="2025-09-05T06:02:55.201260175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2q67,Uid:7ef244c2-37ee-4de1-9971-98c536e0997f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872\"" Sep 5 06:02:55.205144 containerd[1540]: time="2025-09-05T06:02:55.205095698Z" level=info msg="CreateContainer within sandbox \"5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 06:02:55.231586 containerd[1540]: time="2025-09-05T06:02:55.230819356Z" level=info msg="Container f6800a0dd82ba2401bcc9bd4bc9b65fe40ffc40d442223481f1797a591100c1c: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:55.239465 systemd[1]: Started cri-containerd-eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052.scope - libcontainer container eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052. 
Sep 5 06:02:55.240626 containerd[1540]: time="2025-09-05T06:02:55.240582323Z" level=info msg="CreateContainer within sandbox \"5eab00a34914baf70c9d0fe0f1b205126a8049a3df86f049357cc5bab702f872\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6800a0dd82ba2401bcc9bd4bc9b65fe40ffc40d442223481f1797a591100c1c\"" Sep 5 06:02:55.242289 containerd[1540]: time="2025-09-05T06:02:55.241550563Z" level=info msg="StartContainer for \"f6800a0dd82ba2401bcc9bd4bc9b65fe40ffc40d442223481f1797a591100c1c\"" Sep 5 06:02:55.243954 containerd[1540]: time="2025-09-05T06:02:55.243913565Z" level=info msg="connecting to shim f6800a0dd82ba2401bcc9bd4bc9b65fe40ffc40d442223481f1797a591100c1c" address="unix:///run/containerd/s/3fb309f40654a7c9005c7354e5d4becb2b099f879b878ff6aa726931eb10e876" protocol=ttrpc version=3 Sep 5 06:02:55.261991 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:02:55.280448 systemd[1]: Started cri-containerd-f6800a0dd82ba2401bcc9bd4bc9b65fe40ffc40d442223481f1797a591100c1c.scope - libcontainer container f6800a0dd82ba2401bcc9bd4bc9b65fe40ffc40d442223481f1797a591100c1c. 
Sep 5 06:02:55.345100 containerd[1540]: time="2025-09-05T06:02:55.344979077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c67bb578-ft6sk,Uid:e019660a-77f2-4342-88a6-1b159fdd9044,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052\"" Sep 5 06:02:55.347067 containerd[1540]: time="2025-09-05T06:02:55.347021678Z" level=info msg="StartContainer for \"f6800a0dd82ba2401bcc9bd4bc9b65fe40ffc40d442223481f1797a591100c1c\" returns successfully" Sep 5 06:02:55.409581 systemd-networkd[1436]: cali1a2dd534638: Gained IPv6LL Sep 5 06:02:55.824819 containerd[1540]: time="2025-09-05T06:02:55.824749416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:55.825259 containerd[1540]: time="2025-09-05T06:02:55.825237216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 06:02:55.826143 containerd[1540]: time="2025-09-05T06:02:55.826081737Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:55.828252 containerd[1540]: time="2025-09-05T06:02:55.828183018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:55.829144 containerd[1540]: time="2025-09-05T06:02:55.828857019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.595768844s" Sep 5 06:02:55.829144 containerd[1540]: time="2025-09-05T06:02:55.828894699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 06:02:55.830801 containerd[1540]: time="2025-09-05T06:02:55.830765300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 06:02:55.838015 containerd[1540]: time="2025-09-05T06:02:55.837977625Z" level=info msg="CreateContainer within sandbox \"a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 06:02:55.845036 containerd[1540]: time="2025-09-05T06:02:55.845000110Z" level=info msg="Container efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:55.853212 containerd[1540]: time="2025-09-05T06:02:55.852631595Z" level=info msg="CreateContainer within sandbox \"a6408f90f5c67d5937d53dedf87a7eaf0b4aa874fb48125e289dc090b06fdb7e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336\"" Sep 5 06:02:55.853679 containerd[1540]: time="2025-09-05T06:02:55.853643076Z" level=info msg="StartContainer for \"efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336\"" Sep 5 06:02:55.855237 containerd[1540]: time="2025-09-05T06:02:55.855175757Z" level=info msg="connecting to shim efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336" address="unix:///run/containerd/s/f4600b4edc0325bd5cb37a1e8c48d916f754ad8fa55091f905b7255efc6e5e26" protocol=ttrpc version=3 Sep 5 06:02:55.883436 systemd[1]: Started 
cri-containerd-efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336.scope - libcontainer container efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336. Sep 5 06:02:55.887157 containerd[1540]: time="2025-09-05T06:02:55.887087500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-krsv8,Uid:ee39782f-381e-4ec7-a3ce-308afbb7868d,Namespace:calico-system,Attempt:0,}" Sep 5 06:02:55.890410 containerd[1540]: time="2025-09-05T06:02:55.890356262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c67bb578-k765k,Uid:eef48bcf-2bf3-4c04-9956-6120276d01a9,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:02:55.935004 containerd[1540]: time="2025-09-05T06:02:55.934965734Z" level=info msg="StartContainer for \"efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336\" returns successfully" Sep 5 06:02:56.044636 systemd-networkd[1436]: calia919d2ecba3: Link UP Sep 5 06:02:56.044805 systemd-networkd[1436]: calia919d2ecba3: Gained carrier Sep 5 06:02:56.049337 systemd-networkd[1436]: cali40f927f46f4: Gained IPv6LL Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:55.949 [INFO][4760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--krsv8-eth0 goldmane-7988f88666- calico-system ee39782f-381e-4ec7-a3ce-308afbb7868d 783 0 2025-09-05 06:02:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-krsv8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia919d2ecba3 [] [] }} ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Namespace="calico-system" Pod="goldmane-7988f88666-krsv8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--krsv8-" Sep 5 
06:02:56.067230 containerd[1540]: 2025-09-05 06:02:55.949 [INFO][4760] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Namespace="calico-system" Pod="goldmane-7988f88666-krsv8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--krsv8-eth0" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:55.977 [INFO][4811] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" HandleID="k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Workload="localhost-k8s-goldmane--7988f88666--krsv8-eth0" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:55.978 [INFO][4811] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" HandleID="k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Workload="localhost-k8s-goldmane--7988f88666--krsv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483df0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-krsv8", "timestamp":"2025-09-05 06:02:55.977831724 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:55.978 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:55.978 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:55.978 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:55.993 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.005 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.011 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.014 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.017 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.017 [INFO][4811] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.020 [INFO][4811] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71 Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.028 [INFO][4811] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.037 [INFO][4811] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.037 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" host="localhost" Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.037 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:02:56.067230 containerd[1540]: 2025-09-05 06:02:56.037 [INFO][4811] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" HandleID="k8s-pod-network.6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Workload="localhost-k8s-goldmane--7988f88666--krsv8-eth0" Sep 5 06:02:56.068121 containerd[1540]: 2025-09-05 06:02:56.042 [INFO][4760] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Namespace="calico-system" Pod="goldmane-7988f88666-krsv8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--krsv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--krsv8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"ee39782f-381e-4ec7-a3ce-308afbb7868d", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-krsv8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia919d2ecba3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:56.068121 containerd[1540]: 2025-09-05 06:02:56.042 [INFO][4760] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Namespace="calico-system" Pod="goldmane-7988f88666-krsv8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--krsv8-eth0" Sep 5 06:02:56.068121 containerd[1540]: 2025-09-05 06:02:56.042 [INFO][4760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia919d2ecba3 ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Namespace="calico-system" Pod="goldmane-7988f88666-krsv8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--krsv8-eth0" Sep 5 06:02:56.068121 containerd[1540]: 2025-09-05 06:02:56.044 [INFO][4760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Namespace="calico-system" Pod="goldmane-7988f88666-krsv8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--krsv8-eth0" Sep 5 06:02:56.068121 containerd[1540]: 2025-09-05 06:02:56.047 [INFO][4760] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Namespace="calico-system" Pod="goldmane-7988f88666-krsv8" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--krsv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--krsv8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"ee39782f-381e-4ec7-a3ce-308afbb7868d", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71", Pod:"goldmane-7988f88666-krsv8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia919d2ecba3", MAC:"96:35:18:3e:67:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:56.068121 containerd[1540]: 2025-09-05 06:02:56.062 [INFO][4760] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" Namespace="calico-system" Pod="goldmane-7988f88666-krsv8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--krsv8-eth0" Sep 5 06:02:56.098932 kubelet[2667]: I0905 06:02:56.098818 2667 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/calico-kube-controllers-9c78c446b-p45vd" podStartSLOduration=20.50067928 podStartE2EDuration="22.098799125s" podCreationTimestamp="2025-09-05 06:02:34 +0000 UTC" firstStartedPulling="2025-09-05 06:02:54.231820974 +0000 UTC m=+40.432824771" lastFinishedPulling="2025-09-05 06:02:55.829940819 +0000 UTC m=+42.030944616" observedRunningTime="2025-09-05 06:02:56.098719525 +0000 UTC m=+42.299723322" watchObservedRunningTime="2025-09-05 06:02:56.098799125 +0000 UTC m=+42.299802922" Sep 5 06:02:56.115796 kubelet[2667]: I0905 06:02:56.115732 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r2q67" podStartSLOduration=36.115712976 podStartE2EDuration="36.115712976s" podCreationTimestamp="2025-09-05 06:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:02:56.115685216 +0000 UTC m=+42.316689013" watchObservedRunningTime="2025-09-05 06:02:56.115712976 +0000 UTC m=+42.316716773" Sep 5 06:02:56.119852 containerd[1540]: time="2025-09-05T06:02:56.119803339Z" level=info msg="connecting to shim 6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71" address="unix:///run/containerd/s/8d1e9501881efbe2919a6f699723fa65641781c19e53e49c92e5aa26d55f5b5b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:56.168427 systemd-networkd[1436]: caliaad44d4eab1: Link UP Sep 5 06:02:56.169371 systemd-networkd[1436]: caliaad44d4eab1: Gained carrier Sep 5 06:02:56.169661 systemd[1]: Started cri-containerd-6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71.scope - libcontainer container 6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71. 
Sep 5 06:02:56.186736 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:55.955 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0 calico-apiserver-55c67bb578- calico-apiserver eef48bcf-2bf3-4c04-9956-6120276d01a9 784 0 2025-09-05 06:02:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55c67bb578 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55c67bb578-k765k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaad44d4eab1 [] [] }} ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-k765k" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--k765k-" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:55.956 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-k765k" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:55.998 [INFO][4823] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" HandleID="k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Workload="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:55.998 [INFO][4823] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" HandleID="k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Workload="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55c67bb578-k765k", "timestamp":"2025-09-05 06:02:55.998673299 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:55.998 [INFO][4823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.037 [INFO][4823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.037 [INFO][4823] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.098 [INFO][4823] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.107 [INFO][4823] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.117 [INFO][4823] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.123 [INFO][4823] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.130 [INFO][4823] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.130 [INFO][4823] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.132 [INFO][4823] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70 Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.139 [INFO][4823] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.148 [INFO][4823] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.149 [INFO][4823] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" host="localhost" Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.149 [INFO][4823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 06:02:56.188596 containerd[1540]: 2025-09-05 06:02:56.149 [INFO][4823] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" HandleID="k8s-pod-network.77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Workload="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" Sep 5 06:02:56.189312 containerd[1540]: 2025-09-05 06:02:56.155 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-k765k" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0", GenerateName:"calico-apiserver-55c67bb578-", Namespace:"calico-apiserver", SelfLink:"", UID:"eef48bcf-2bf3-4c04-9956-6120276d01a9", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c67bb578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55c67bb578-k765k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaad44d4eab1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:56.189312 containerd[1540]: 2025-09-05 06:02:56.155 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-k765k" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" Sep 5 06:02:56.189312 containerd[1540]: 2025-09-05 06:02:56.156 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaad44d4eab1 ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-k765k" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" Sep 5 06:02:56.189312 containerd[1540]: 2025-09-05 06:02:56.170 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-k765k" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" Sep 5 06:02:56.189312 containerd[1540]: 2025-09-05 06:02:56.170 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-k765k" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0", 
GenerateName:"calico-apiserver-55c67bb578-", Namespace:"calico-apiserver", SelfLink:"", UID:"eef48bcf-2bf3-4c04-9956-6120276d01a9", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55c67bb578", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70", Pod:"calico-apiserver-55c67bb578-k765k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaad44d4eab1", MAC:"ea:d8:9a:d5:93:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:02:56.189312 containerd[1540]: 2025-09-05 06:02:56.184 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" Namespace="calico-apiserver" Pod="calico-apiserver-55c67bb578-k765k" WorkloadEndpoint="localhost-k8s-calico--apiserver--55c67bb578--k765k-eth0" Sep 5 06:02:56.217561 containerd[1540]: time="2025-09-05T06:02:56.217520604Z" level=info msg="connecting to shim 77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70" 
address="unix:///run/containerd/s/8b8d414026880c854c3c001a45565d6114085e7ce06ea05629ee54aebc481729" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:02:56.218293 containerd[1540]: time="2025-09-05T06:02:56.218190004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-krsv8,Uid:ee39782f-381e-4ec7-a3ce-308afbb7868d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71\"" Sep 5 06:02:56.247428 systemd[1]: Started cri-containerd-77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70.scope - libcontainer container 77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70. Sep 5 06:02:56.258345 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:02:56.279235 containerd[1540]: time="2025-09-05T06:02:56.279166885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55c67bb578-k765k,Uid:eef48bcf-2bf3-4c04-9956-6120276d01a9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70\"" Sep 5 06:02:56.560348 systemd-networkd[1436]: cali74bbde239d0: Gained IPv6LL Sep 5 06:02:56.743620 containerd[1540]: time="2025-09-05T06:02:56.743555032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336\" id:\"30a6b2c325f59f9b7e152bcc528c67de4db1d150cdd6773931f6b5aca5a0b542\" pid:4968 exited_at:{seconds:1757052176 nanos:742170031}" Sep 5 06:02:56.779901 containerd[1540]: time="2025-09-05T06:02:56.779856496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336\" id:\"8868bc03e5d0de3ff3cd11c98ab0fa4c4ca17c8959c499c4f1e0e4da2f5d9e73\" pid:4990 exited_at:{seconds:1757052176 nanos:779611776}" Sep 5 06:02:56.847177 containerd[1540]: 
time="2025-09-05T06:02:56.846570620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:56.848270 containerd[1540]: time="2025-09-05T06:02:56.848217942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 06:02:56.849011 containerd[1540]: time="2025-09-05T06:02:56.848920102Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:56.852222 containerd[1540]: time="2025-09-05T06:02:56.851842424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:56.852407 containerd[1540]: time="2025-09-05T06:02:56.852381584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.021577684s" Sep 5 06:02:56.852488 containerd[1540]: time="2025-09-05T06:02:56.852466424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 06:02:56.854401 containerd[1540]: time="2025-09-05T06:02:56.854368186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 06:02:56.863055 containerd[1540]: time="2025-09-05T06:02:56.863015791Z" level=info msg="CreateContainer within sandbox 
\"06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 06:02:56.875220 containerd[1540]: time="2025-09-05T06:02:56.873432318Z" level=info msg="Container 04c1a99f8af9d0d4a011e19f43f8a021695fd02a6c7ec79aa49c98ecfcae54a6: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:56.888837 containerd[1540]: time="2025-09-05T06:02:56.888780448Z" level=info msg="CreateContainer within sandbox \"06ddf9176aa2ab9481201f32935b5e5c0989c6859655ed0ed8599719195d06a3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"04c1a99f8af9d0d4a011e19f43f8a021695fd02a6c7ec79aa49c98ecfcae54a6\"" Sep 5 06:02:56.891236 containerd[1540]: time="2025-09-05T06:02:56.890389410Z" level=info msg="StartContainer for \"04c1a99f8af9d0d4a011e19f43f8a021695fd02a6c7ec79aa49c98ecfcae54a6\"" Sep 5 06:02:56.892261 containerd[1540]: time="2025-09-05T06:02:56.892232251Z" level=info msg="connecting to shim 04c1a99f8af9d0d4a011e19f43f8a021695fd02a6c7ec79aa49c98ecfcae54a6" address="unix:///run/containerd/s/576e7cd7d5d3307cb7d324ea9f8a6e6dfd7523aef308d0d3695f7b4b4aae2cb8" protocol=ttrpc version=3 Sep 5 06:02:56.918392 systemd[1]: Started cri-containerd-04c1a99f8af9d0d4a011e19f43f8a021695fd02a6c7ec79aa49c98ecfcae54a6.scope - libcontainer container 04c1a99f8af9d0d4a011e19f43f8a021695fd02a6c7ec79aa49c98ecfcae54a6. 
Sep 5 06:02:56.944363 systemd-networkd[1436]: cali5b6148e27e9: Gained IPv6LL Sep 5 06:02:56.961706 containerd[1540]: time="2025-09-05T06:02:56.961592817Z" level=info msg="StartContainer for \"04c1a99f8af9d0d4a011e19f43f8a021695fd02a6c7ec79aa49c98ecfcae54a6\" returns successfully" Sep 5 06:02:57.108530 kubelet[2667]: I0905 06:02:57.108371 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-46zb7" podStartSLOduration=19.384385583 podStartE2EDuration="23.108349829s" podCreationTimestamp="2025-09-05 06:02:34 +0000 UTC" firstStartedPulling="2025-09-05 06:02:53.129242059 +0000 UTC m=+39.330245816" lastFinishedPulling="2025-09-05 06:02:56.853206265 +0000 UTC m=+43.054210062" observedRunningTime="2025-09-05 06:02:57.107498589 +0000 UTC m=+43.308502426" watchObservedRunningTime="2025-09-05 06:02:57.108349829 +0000 UTC m=+43.309353626" Sep 5 06:02:57.905331 systemd-networkd[1436]: calia919d2ecba3: Gained IPv6LL Sep 5 06:02:57.958594 kubelet[2667]: I0905 06:02:57.958560 2667 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 06:02:57.964496 kubelet[2667]: I0905 06:02:57.964472 2667 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 06:02:58.160464 systemd-networkd[1436]: caliaad44d4eab1: Gained IPv6LL Sep 5 06:02:58.184320 containerd[1540]: time="2025-09-05T06:02:58.184171411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:58.184915 containerd[1540]: time="2025-09-05T06:02:58.184699731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 06:02:58.186345 containerd[1540]: time="2025-09-05T06:02:58.186291852Z" level=info 
msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:58.188215 containerd[1540]: time="2025-09-05T06:02:58.188175053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:58.188920 containerd[1540]: time="2025-09-05T06:02:58.188754293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.334349227s" Sep 5 06:02:58.188920 containerd[1540]: time="2025-09-05T06:02:58.188783013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 06:02:58.190968 containerd[1540]: time="2025-09-05T06:02:58.190939695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 06:02:58.193911 containerd[1540]: time="2025-09-05T06:02:58.193868376Z" level=info msg="CreateContainer within sandbox \"eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 06:02:58.200541 containerd[1540]: time="2025-09-05T06:02:58.199808860Z" level=info msg="Container 9debd611b8940e87ad0f48b55a4f8c9f5e54f7e815c395d92b02a67931bccb60: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:58.206877 containerd[1540]: time="2025-09-05T06:02:58.206784504Z" level=info msg="CreateContainer within sandbox \"eb45f9858c74ef70dd8d0d01926e09e9dcc396ff9acea9e606315456a6d94052\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9debd611b8940e87ad0f48b55a4f8c9f5e54f7e815c395d92b02a67931bccb60\"" Sep 5 06:02:58.207255 containerd[1540]: time="2025-09-05T06:02:58.207232504Z" level=info msg="StartContainer for \"9debd611b8940e87ad0f48b55a4f8c9f5e54f7e815c395d92b02a67931bccb60\"" Sep 5 06:02:58.208413 containerd[1540]: time="2025-09-05T06:02:58.208390305Z" level=info msg="connecting to shim 9debd611b8940e87ad0f48b55a4f8c9f5e54f7e815c395d92b02a67931bccb60" address="unix:///run/containerd/s/c3089ff5809303eff545e26fe22ab50a7b479439f661524afe9ef24692795267" protocol=ttrpc version=3 Sep 5 06:02:58.228335 systemd[1]: Started cri-containerd-9debd611b8940e87ad0f48b55a4f8c9f5e54f7e815c395d92b02a67931bccb60.scope - libcontainer container 9debd611b8940e87ad0f48b55a4f8c9f5e54f7e815c395d92b02a67931bccb60. Sep 5 06:02:58.258582 containerd[1540]: time="2025-09-05T06:02:58.258477734Z" level=info msg="StartContainer for \"9debd611b8940e87ad0f48b55a4f8c9f5e54f7e815c395d92b02a67931bccb60\" returns successfully" Sep 5 06:02:58.695414 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:54950.service - OpenSSH per-connection server daemon (10.0.0.1:54950). Sep 5 06:02:58.767699 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 54950 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:02:58.769146 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:02:58.772841 systemd-logind[1516]: New session 9 of user core. Sep 5 06:02:58.782453 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 06:02:59.044790 sshd[5088]: Connection closed by 10.0.0.1 port 54950 Sep 5 06:02:59.049183 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Sep 5 06:02:59.059994 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:54950.service: Deactivated successfully. Sep 5 06:02:59.063720 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 5 06:02:59.066735 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. Sep 5 06:02:59.069365 systemd-logind[1516]: Removed session 9. Sep 5 06:02:59.120514 kubelet[2667]: I0905 06:02:59.120458 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55c67bb578-ft6sk" podStartSLOduration=27.276475496 podStartE2EDuration="30.120440432s" podCreationTimestamp="2025-09-05 06:02:29 +0000 UTC" firstStartedPulling="2025-09-05 06:02:55.346580758 +0000 UTC m=+41.547584555" lastFinishedPulling="2025-09-05 06:02:58.190545694 +0000 UTC m=+44.391549491" observedRunningTime="2025-09-05 06:02:59.119506671 +0000 UTC m=+45.320510468" watchObservedRunningTime="2025-09-05 06:02:59.120440432 +0000 UTC m=+45.321444229" Sep 5 06:02:59.506788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963567972.mount: Deactivated successfully. Sep 5 06:02:59.833611 containerd[1540]: time="2025-09-05T06:02:59.833122901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:59.833957 containerd[1540]: time="2025-09-05T06:02:59.833833701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 06:02:59.834820 containerd[1540]: time="2025-09-05T06:02:59.834744742Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:59.836975 containerd[1540]: time="2025-09-05T06:02:59.836918463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:02:59.837621 containerd[1540]: time="2025-09-05T06:02:59.837585463Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 1.646614968s" Sep 5 06:02:59.837621 containerd[1540]: time="2025-09-05T06:02:59.837619063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 06:02:59.838793 containerd[1540]: time="2025-09-05T06:02:59.838588304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 06:02:59.841952 containerd[1540]: time="2025-09-05T06:02:59.841901065Z" level=info msg="CreateContainer within sandbox \"6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 06:02:59.853359 containerd[1540]: time="2025-09-05T06:02:59.853304672Z" level=info msg="Container 1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:02:59.860100 containerd[1540]: time="2025-09-05T06:02:59.860044995Z" level=info msg="CreateContainer within sandbox \"6f457b577c463d4e23fc733a97863a82e42b3d3d0289c9e81c72ea949497ac71\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f\"" Sep 5 06:02:59.860629 containerd[1540]: time="2025-09-05T06:02:59.860597196Z" level=info msg="StartContainer for \"1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f\"" Sep 5 06:02:59.863359 containerd[1540]: time="2025-09-05T06:02:59.863322157Z" level=info msg="connecting to shim 1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f" 
address="unix:///run/containerd/s/8d1e9501881efbe2919a6f699723fa65641781c19e53e49c92e5aa26d55f5b5b" protocol=ttrpc version=3 Sep 5 06:02:59.886381 systemd[1]: Started cri-containerd-1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f.scope - libcontainer container 1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f. Sep 5 06:02:59.933154 containerd[1540]: time="2025-09-05T06:02:59.933106435Z" level=info msg="StartContainer for \"1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f\" returns successfully" Sep 5 06:03:00.109341 kubelet[2667]: I0905 06:03:00.108534 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:03:00.126022 kubelet[2667]: I0905 06:03:00.125956 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-krsv8" podStartSLOduration=22.506845677 podStartE2EDuration="26.125764456s" podCreationTimestamp="2025-09-05 06:02:34 +0000 UTC" firstStartedPulling="2025-09-05 06:02:56.219504365 +0000 UTC m=+42.420508162" lastFinishedPulling="2025-09-05 06:02:59.838423144 +0000 UTC m=+46.039426941" observedRunningTime="2025-09-05 06:03:00.125509816 +0000 UTC m=+46.326513693" watchObservedRunningTime="2025-09-05 06:03:00.125764456 +0000 UTC m=+46.326768253" Sep 5 06:03:00.157056 containerd[1540]: time="2025-09-05T06:03:00.156545512Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:03:00.157209 containerd[1540]: time="2025-09-05T06:03:00.157135592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 06:03:00.159325 containerd[1540]: time="2025-09-05T06:03:00.159287033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 320.664929ms" Sep 5 06:03:00.159325 containerd[1540]: time="2025-09-05T06:03:00.159327193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 06:03:00.162438 containerd[1540]: time="2025-09-05T06:03:00.162384635Z" level=info msg="CreateContainer within sandbox \"77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 06:03:00.171751 containerd[1540]: time="2025-09-05T06:03:00.171692920Z" level=info msg="Container 7f7a49f2958f8a1ca40ebcf4ab440619ac4cb63771c304dd4bfa2785590e5742: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:03:00.179925 containerd[1540]: time="2025-09-05T06:03:00.179883564Z" level=info msg="CreateContainer within sandbox \"77a4c1a5cb09d584fb22149b17053af947093fc32775eafa40d2b71a6684ca70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7f7a49f2958f8a1ca40ebcf4ab440619ac4cb63771c304dd4bfa2785590e5742\"" Sep 5 06:03:00.181232 containerd[1540]: time="2025-09-05T06:03:00.180724484Z" level=info msg="StartContainer for \"7f7a49f2958f8a1ca40ebcf4ab440619ac4cb63771c304dd4bfa2785590e5742\"" Sep 5 06:03:00.182034 containerd[1540]: time="2025-09-05T06:03:00.181998925Z" level=info msg="connecting to shim 7f7a49f2958f8a1ca40ebcf4ab440619ac4cb63771c304dd4bfa2785590e5742" address="unix:///run/containerd/s/8b8d414026880c854c3c001a45565d6114085e7ce06ea05629ee54aebc481729" protocol=ttrpc version=3 Sep 5 06:03:00.209637 systemd[1]: Started cri-containerd-7f7a49f2958f8a1ca40ebcf4ab440619ac4cb63771c304dd4bfa2785590e5742.scope - libcontainer container 7f7a49f2958f8a1ca40ebcf4ab440619ac4cb63771c304dd4bfa2785590e5742. 
Sep 5 06:03:00.275160 containerd[1540]: time="2025-09-05T06:03:00.275123053Z" level=info msg="StartContainer for \"7f7a49f2958f8a1ca40ebcf4ab440619ac4cb63771c304dd4bfa2785590e5742\" returns successfully" Sep 5 06:03:01.114851 kubelet[2667]: I0905 06:03:01.114815 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:03:02.116871 kubelet[2667]: I0905 06:03:02.116805 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:03:02.955430 kubelet[2667]: I0905 06:03:02.955324 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:03:03.041868 containerd[1540]: time="2025-09-05T06:03:03.041818611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f\" id:\"80a6d5db67a38d070720c2a0739b7b11a2699d9de75a2904f361a07db5689e1a\" pid:5205 exited_at:{seconds:1757052183 nanos:38667250}" Sep 5 06:03:03.060507 kubelet[2667]: I0905 06:03:03.059953 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55c67bb578-k765k" podStartSLOduration=30.18031055 podStartE2EDuration="34.059931539s" podCreationTimestamp="2025-09-05 06:02:29 +0000 UTC" firstStartedPulling="2025-09-05 06:02:56.280619285 +0000 UTC m=+42.481623082" lastFinishedPulling="2025-09-05 06:03:00.160240274 +0000 UTC m=+46.361244071" observedRunningTime="2025-09-05 06:03:01.130274886 +0000 UTC m=+47.331278683" watchObservedRunningTime="2025-09-05 06:03:03.059931539 +0000 UTC m=+49.260935336" Sep 5 06:03:03.126481 containerd[1540]: time="2025-09-05T06:03:03.126435887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f\" id:\"afdbe24d8f7b0ac88f93ee51bbd7fa13e8f95f396cbc69c6aac73046a3545d5b\" pid:5228 exited_at:{seconds:1757052183 nanos:125939647}" Sep 5 06:03:04.057833 systemd[1]: Started 
sshd@9-10.0.0.131:22-10.0.0.1:48272.service - OpenSSH per-connection server daemon (10.0.0.1:48272). Sep 5 06:03:04.125730 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 48272 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:03:04.127328 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:03:04.131185 systemd-logind[1516]: New session 10 of user core. Sep 5 06:03:04.140396 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 06:03:04.305307 sshd[5250]: Connection closed by 10.0.0.1 port 48272 Sep 5 06:03:04.305850 sshd-session[5243]: pam_unix(sshd:session): session closed for user core Sep 5 06:03:04.314517 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:48272.service: Deactivated successfully. Sep 5 06:03:04.316419 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 06:03:04.318281 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. Sep 5 06:03:04.320132 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:48288.service - OpenSSH per-connection server daemon (10.0.0.1:48288). Sep 5 06:03:04.323338 systemd-logind[1516]: Removed session 10. Sep 5 06:03:04.383505 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 48288 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:03:04.385169 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:03:04.388980 systemd-logind[1516]: New session 11 of user core. Sep 5 06:03:04.401363 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 06:03:04.608244 sshd[5267]: Connection closed by 10.0.0.1 port 48288 Sep 5 06:03:04.607846 sshd-session[5264]: pam_unix(sshd:session): session closed for user core Sep 5 06:03:04.620470 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:48288.service: Deactivated successfully. Sep 5 06:03:04.623697 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 5 06:03:04.625655 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit.
Sep 5 06:03:04.627293 systemd-logind[1516]: Removed session 11.
Sep 5 06:03:04.629479 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:48302.service - OpenSSH per-connection server daemon (10.0.0.1:48302).
Sep 5 06:03:04.686146 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 48302 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:04.687114 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:04.692515 systemd-logind[1516]: New session 12 of user core.
Sep 5 06:03:04.697354 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 5 06:03:04.881795 sshd[5284]: Connection closed by 10.0.0.1 port 48302
Sep 5 06:03:04.882225 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:04.886115 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:48302.service: Deactivated successfully.
Sep 5 06:03:04.889089 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 06:03:04.890386 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit.
Sep 5 06:03:04.891686 systemd-logind[1516]: Removed session 12.
Sep 5 06:03:09.894363 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:48310.service - OpenSSH per-connection server daemon (10.0.0.1:48310).
Sep 5 06:03:09.950495 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 48310 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:09.951836 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:09.955954 systemd-logind[1516]: New session 13 of user core.
Sep 5 06:03:09.966380 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 06:03:10.142106 sshd[5312]: Connection closed by 10.0.0.1 port 48310
Sep 5 06:03:10.143642 sshd-session[5309]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:10.149475 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:48310.service: Deactivated successfully.
Sep 5 06:03:10.151601 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 06:03:10.153257 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit.
Sep 5 06:03:10.155845 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:52928.service - OpenSSH per-connection server daemon (10.0.0.1:52928).
Sep 5 06:03:10.156900 systemd-logind[1516]: Removed session 13.
Sep 5 06:03:10.215779 sshd[5325]: Accepted publickey for core from 10.0.0.1 port 52928 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:10.216965 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:10.221255 systemd-logind[1516]: New session 14 of user core.
Sep 5 06:03:10.227354 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 06:03:10.320989 kubelet[2667]: I0905 06:03:10.320942 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 06:03:10.443918 sshd[5328]: Connection closed by 10.0.0.1 port 52928
Sep 5 06:03:10.444242 sshd-session[5325]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:10.456273 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:52928.service: Deactivated successfully.
Sep 5 06:03:10.457792 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 06:03:10.458543 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit.
Sep 5 06:03:10.461080 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:52944.service - OpenSSH per-connection server daemon (10.0.0.1:52944).
Sep 5 06:03:10.461771 systemd-logind[1516]: Removed session 14.
Sep 5 06:03:10.518922 sshd[5341]: Accepted publickey for core from 10.0.0.1 port 52944 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:10.519973 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:10.524436 systemd-logind[1516]: New session 15 of user core.
Sep 5 06:03:10.535353 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 06:03:12.103228 sshd[5344]: Connection closed by 10.0.0.1 port 52944
Sep 5 06:03:12.104148 sshd-session[5341]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:12.119473 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:52944.service: Deactivated successfully.
Sep 5 06:03:12.121301 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 06:03:12.121515 systemd[1]: session-15.scope: Consumed 535ms CPU time, 70.7M memory peak.
Sep 5 06:03:12.123623 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit.
Sep 5 06:03:12.127170 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:52960.service - OpenSSH per-connection server daemon (10.0.0.1:52960).
Sep 5 06:03:12.129870 systemd-logind[1516]: Removed session 15.
Sep 5 06:03:12.198742 sshd[5364]: Accepted publickey for core from 10.0.0.1 port 52960 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:12.200095 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:12.204147 systemd-logind[1516]: New session 16 of user core.
Sep 5 06:03:12.210336 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 06:03:12.515004 sshd[5367]: Connection closed by 10.0.0.1 port 52960
Sep 5 06:03:12.515406 sshd-session[5364]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:12.527218 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:52960.service: Deactivated successfully.
Sep 5 06:03:12.530773 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 06:03:12.532874 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit.
Sep 5 06:03:12.536340 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:52972.service - OpenSSH per-connection server daemon (10.0.0.1:52972).
Sep 5 06:03:12.537575 systemd-logind[1516]: Removed session 16.
Sep 5 06:03:12.588851 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 52972 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:12.590317 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:12.595636 systemd-logind[1516]: New session 17 of user core.
Sep 5 06:03:12.602340 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 06:03:12.750137 sshd[5381]: Connection closed by 10.0.0.1 port 52972
Sep 5 06:03:12.750496 sshd-session[5378]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:12.753864 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:52972.service: Deactivated successfully.
Sep 5 06:03:12.755565 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 06:03:12.756262 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit.
Sep 5 06:03:12.757262 systemd-logind[1516]: Removed session 17.
Sep 5 06:03:17.302756 containerd[1540]: time="2025-09-05T06:03:17.302657500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cb166a536cf9530d120cb881f3f3e4df38206a10ca5e8b26e7469ecb8e0169f\" id:\"4854fe39172dfacb16d72322680c1b1b8ab4e794fa73a20c8223beaed1fb1f51\" pid:5410 exited_at:{seconds:1757052197 nanos:302244259}"
Sep 5 06:03:17.761578 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:52982.service - OpenSSH per-connection server daemon (10.0.0.1:52982).
Sep 5 06:03:17.811484 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 52982 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:17.812622 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:17.816588 systemd-logind[1516]: New session 18 of user core.
Sep 5 06:03:17.827350 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 06:03:17.980275 sshd[5425]: Connection closed by 10.0.0.1 port 52982
Sep 5 06:03:17.980367 sshd-session[5422]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:17.985134 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:52982.service: Deactivated successfully.
Sep 5 06:03:17.986933 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 06:03:17.989697 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit.
Sep 5 06:03:17.991705 systemd-logind[1516]: Removed session 18.
Sep 5 06:03:18.021728 containerd[1540]: time="2025-09-05T06:03:18.021621062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"983d33e0e3bd671f1c95c137cd88ab7c68e96dd04599218e1e7593471aec5487\" id:\"f801499614e4cd3fc56196ebc4519507784f1e41e83d16aca10f0851a1bf8c20\" pid:5448 exited_at:{seconds:1757052198 nanos:21352342}"
Sep 5 06:03:19.917961 containerd[1540]: time="2025-09-05T06:03:19.917923117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336\" id:\"12a832a1f9c11f1639d0269d1ff0e7fcbc87e155cfc2b44c6b81588a432e61e4\" pid:5478 exited_at:{seconds:1757052199 nanos:917732757}"
Sep 5 06:03:22.994535 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:48222.service - OpenSSH per-connection server daemon (10.0.0.1:48222).
Sep 5 06:03:23.053998 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 48222 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:23.055218 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:23.063240 systemd-logind[1516]: New session 19 of user core.
Sep 5 06:03:23.071428 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 06:03:23.198510 sshd[5495]: Connection closed by 10.0.0.1 port 48222
Sep 5 06:03:23.197853 sshd-session[5492]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:23.201876 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:48222.service: Deactivated successfully.
Sep 5 06:03:23.203712 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 06:03:23.205917 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit.
Sep 5 06:03:23.207086 systemd-logind[1516]: Removed session 19.
Sep 5 06:03:26.718492 containerd[1540]: time="2025-09-05T06:03:26.718448566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efae8c726164192203a5a12797d38f9b3fcbbcbc6f2945ca9804cf098b163336\" id:\"a472a6824577cf708291165ccad1a5d41f3d33d66f61746d5c57d86a4d6d3c41\" pid:5519 exited_at:{seconds:1757052206 nanos:718142283}"
Sep 5 06:03:28.213542 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:48228.service - OpenSSH per-connection server daemon (10.0.0.1:48228).
Sep 5 06:03:28.271674 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 48228 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:03:28.272949 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:03:28.277040 systemd-logind[1516]: New session 20 of user core.
Sep 5 06:03:28.283349 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 06:03:28.401797 sshd[5539]: Connection closed by 10.0.0.1 port 48228
Sep 5 06:03:28.402426 sshd-session[5536]: pam_unix(sshd:session): session closed for user core
Sep 5 06:03:28.406030 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:48228.service: Deactivated successfully.
Sep 5 06:03:28.407668 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 06:03:28.408569 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit.
Sep 5 06:03:28.409668 systemd-logind[1516]: Removed session 20.