Sep 5 00:02:19.740105 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 00:02:19.740126 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Sep 4 22:21:34 -00 2025
Sep 5 00:02:19.740136 kernel: KASLR enabled
Sep 5 00:02:19.740142 kernel: efi: EFI v2.7 by EDK II
Sep 5 00:02:19.740147 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 5 00:02:19.740152 kernel: random: crng init done
Sep 5 00:02:19.740159 kernel: secureboot: Secure boot disabled
Sep 5 00:02:19.740165 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:02:19.740171 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 5 00:02:19.740177 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 00:02:19.740183 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740189 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740195 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740201 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740207 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740215 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740221 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740227 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740233 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:02:19.740239 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 5 00:02:19.740245 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 5 00:02:19.740251 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:02:19.740257 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 5 00:02:19.740262 kernel: Zone ranges:
Sep 5 00:02:19.740268 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:02:19.740276 kernel: DMA32 empty
Sep 5 00:02:19.740293 kernel: Normal empty
Sep 5 00:02:19.740299 kernel: Device empty
Sep 5 00:02:19.740305 kernel: Movable zone start for each node
Sep 5 00:02:19.740311 kernel: Early memory node ranges
Sep 5 00:02:19.740317 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 5 00:02:19.740323 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 5 00:02:19.740329 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 5 00:02:19.740335 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 5 00:02:19.740341 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 5 00:02:19.740347 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 5 00:02:19.740353 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 5 00:02:19.740361 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 5 00:02:19.740367 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 5 00:02:19.740373 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 5 00:02:19.740382 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 5 00:02:19.740389 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 5 00:02:19.740395 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 5 00:02:19.740403 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:02:19.740409 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 5 00:02:19.740416 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 5 00:02:19.740422 kernel: psci: probing for conduit method from ACPI.
Sep 5 00:02:19.740429 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 00:02:19.740435 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 00:02:19.740466 kernel: psci: Trusted OS migration not required
Sep 5 00:02:19.740473 kernel: psci: SMC Calling Convention v1.1
Sep 5 00:02:19.740479 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 00:02:19.740485 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 5 00:02:19.740494 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 5 00:02:19.740500 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 5 00:02:19.740506 kernel: Detected PIPT I-cache on CPU0
Sep 5 00:02:19.740513 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 00:02:19.740519 kernel: CPU features: detected: Spectre-v4
Sep 5 00:02:19.740526 kernel: CPU features: detected: Spectre-BHB
Sep 5 00:02:19.740532 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 00:02:19.740538 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 00:02:19.740545 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 00:02:19.740551 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 00:02:19.740557 kernel: alternatives: applying boot alternatives
Sep 5 00:02:19.740564 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=936dbc4ea592050e15794e1e6e7f70cd7cba0dbef72270410b4bbc6a29324de7
Sep 5 00:02:19.740573 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:02:19.740579 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:02:19.740585 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:02:19.740592 kernel: Fallback order for Node 0: 0
Sep 5 00:02:19.740598 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 5 00:02:19.740613 kernel: Policy zone: DMA
Sep 5 00:02:19.740619 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:02:19.740625 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 5 00:02:19.740632 kernel: software IO TLB: area num 4.
Sep 5 00:02:19.740638 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 5 00:02:19.740644 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 5 00:02:19.740653 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:02:19.740659 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:02:19.740666 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:02:19.740673 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:02:19.740680 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:02:19.740686 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:02:19.740693 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:02:19.740699 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:02:19.740705 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:02:19.740712 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:02:19.740718 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 00:02:19.740726 kernel: GICv3: 256 SPIs implemented
Sep 5 00:02:19.740732 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 00:02:19.740738 kernel: Root IRQ handler: gic_handle_irq
Sep 5 00:02:19.740745 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 00:02:19.740751 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 5 00:02:19.740757 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 00:02:19.740764 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 00:02:19.740770 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 00:02:19.740777 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 5 00:02:19.740784 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 5 00:02:19.740790 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 5 00:02:19.740796 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:02:19.740804 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:02:19.740810 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 00:02:19.740817 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 00:02:19.740823 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 00:02:19.740830 kernel: arm-pv: using stolen time PV
Sep 5 00:02:19.740837 kernel: Console: colour dummy device 80x25
Sep 5 00:02:19.740843 kernel: ACPI: Core revision 20240827
Sep 5 00:02:19.740850 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 00:02:19.740857 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:02:19.740863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 00:02:19.740871 kernel: landlock: Up and running.
Sep 5 00:02:19.740877 kernel: SELinux: Initializing.
Sep 5 00:02:19.740884 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:02:19.740890 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:02:19.740897 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:02:19.740903 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:02:19.740910 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 5 00:02:19.740916 kernel: Remapping and enabling EFI services.
Sep 5 00:02:19.740923 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:02:19.740935 kernel: Detected PIPT I-cache on CPU1
Sep 5 00:02:19.740942 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 00:02:19.740949 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 5 00:02:19.740957 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:02:19.740964 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 00:02:19.740971 kernel: Detected PIPT I-cache on CPU2
Sep 5 00:02:19.740978 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 5 00:02:19.740985 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 5 00:02:19.740993 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:02:19.741000 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 5 00:02:19.741007 kernel: Detected PIPT I-cache on CPU3
Sep 5 00:02:19.741014 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 5 00:02:19.741021 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 5 00:02:19.741027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:02:19.741034 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 5 00:02:19.741041 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:02:19.741048 kernel: SMP: Total of 4 processors activated.
Sep 5 00:02:19.741056 kernel: CPU: All CPU(s) started at EL1
Sep 5 00:02:19.741063 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 00:02:19.741070 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 00:02:19.741077 kernel: CPU features: detected: Common not Private translations
Sep 5 00:02:19.741083 kernel: CPU features: detected: CRC32 instructions
Sep 5 00:02:19.741090 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 5 00:02:19.741097 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 00:02:19.741104 kernel: CPU features: detected: LSE atomic instructions
Sep 5 00:02:19.741111 kernel: CPU features: detected: Privileged Access Never
Sep 5 00:02:19.741119 kernel: CPU features: detected: RAS Extension Support
Sep 5 00:02:19.741126 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 00:02:19.741133 kernel: alternatives: applying system-wide alternatives
Sep 5 00:02:19.741139 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 5 00:02:19.741147 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 5 00:02:19.741153 kernel: devtmpfs: initialized
Sep 5 00:02:19.741160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:02:19.741167 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:02:19.741174 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 00:02:19.741182 kernel: 0 pages in range for non-PLT usage
Sep 5 00:02:19.741189 kernel: 508560 pages in range for PLT usage
Sep 5 00:02:19.741196 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:02:19.741203 kernel: SMBIOS 3.0.0 present.
Sep 5 00:02:19.741209 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 5 00:02:19.741216 kernel: DMI: Memory slots populated: 1/1
Sep 5 00:02:19.741223 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:02:19.741230 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 00:02:19.741237 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 00:02:19.741245 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 00:02:19.741252 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:02:19.741259 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 5 00:02:19.741266 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:02:19.741272 kernel: cpuidle: using governor menu
Sep 5 00:02:19.741279 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 00:02:19.741286 kernel: ASID allocator initialised with 32768 entries
Sep 5 00:02:19.741293 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:02:19.741300 kernel: Serial: AMBA PL011 UART driver
Sep 5 00:02:19.741308 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:02:19.741314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:02:19.741321 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 00:02:19.741328 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 00:02:19.741335 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:02:19.741342 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:02:19.741349 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 00:02:19.741355 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 00:02:19.741362 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:02:19.741370 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:02:19.741377 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:02:19.741384 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:02:19.741391 kernel: ACPI: Interpreter enabled
Sep 5 00:02:19.741397 kernel: ACPI: Using GIC for interrupt routing
Sep 5 00:02:19.741404 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 00:02:19.741411 kernel: ACPI: CPU0 has been hot-added
Sep 5 00:02:19.741418 kernel: ACPI: CPU1 has been hot-added
Sep 5 00:02:19.741425 kernel: ACPI: CPU2 has been hot-added
Sep 5 00:02:19.741431 kernel: ACPI: CPU3 has been hot-added
Sep 5 00:02:19.741452 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 00:02:19.741460 kernel: printk: legacy console [ttyAMA0] enabled
Sep 5 00:02:19.741466 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:02:19.741610 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:02:19.741678 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 00:02:19.741736 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 00:02:19.741793 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 00:02:19.741853 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 00:02:19.741862 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 00:02:19.741869 kernel: PCI host bridge to bus 0000:00
Sep 5 00:02:19.741935 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 00:02:19.741988 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 00:02:19.742040 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 00:02:19.742091 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:02:19.742168 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 5 00:02:19.742237 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 5 00:02:19.742297 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 5 00:02:19.742356 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 5 00:02:19.742416 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 00:02:19.742542 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 5 00:02:19.742612 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 5 00:02:19.742678 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 5 00:02:19.742733 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 00:02:19.742785 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 00:02:19.742837 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 00:02:19.742846 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 00:02:19.742853 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 00:02:19.742860 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 00:02:19.742869 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 00:02:19.742876 kernel: iommu: Default domain type: Translated
Sep 5 00:02:19.742882 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 00:02:19.742889 kernel: efivars: Registered efivars operations
Sep 5 00:02:19.742896 kernel: vgaarb: loaded
Sep 5 00:02:19.742903 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 00:02:19.742910 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:02:19.742916 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:02:19.742923 kernel: pnp: PnP ACPI init
Sep 5 00:02:19.742994 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 00:02:19.743004 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 00:02:19.743011 kernel: NET: Registered PF_INET protocol family
Sep 5 00:02:19.743018 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:02:19.743025 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:02:19.743032 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:02:19.743039 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:02:19.743046 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:02:19.743054 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:02:19.743061 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:02:19.743068 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:02:19.743075 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:02:19.743082 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:02:19.743089 kernel: kvm [1]: HYP mode not available
Sep 5 00:02:19.743095 kernel: Initialise system trusted keyrings
Sep 5 00:02:19.743102 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:02:19.743109 kernel: Key type asymmetric registered
Sep 5 00:02:19.743117 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:02:19.743124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 5 00:02:19.743131 kernel: io scheduler mq-deadline registered
Sep 5 00:02:19.743138 kernel: io scheduler kyber registered
Sep 5 00:02:19.743145 kernel: io scheduler bfq registered
Sep 5 00:02:19.743152 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 00:02:19.743159 kernel: ACPI: button: Power Button [PWRB]
Sep 5 00:02:19.743166 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 00:02:19.743224 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 5 00:02:19.743234 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:02:19.743241 kernel: thunder_xcv, ver 1.0
Sep 5 00:02:19.743248 kernel: thunder_bgx, ver 1.0
Sep 5 00:02:19.743255 kernel: nicpf, ver 1.0
Sep 5 00:02:19.743261 kernel: nicvf, ver 1.0
Sep 5 00:02:19.743326 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 00:02:19.743382 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T00:02:19 UTC (1757030539)
Sep 5 00:02:19.743391 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 00:02:19.743398 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 5 00:02:19.743406 kernel: watchdog: NMI not fully supported
Sep 5 00:02:19.743413 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 00:02:19.743420 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:02:19.743428 kernel: Segment Routing with IPv6
Sep 5 00:02:19.743434 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:02:19.743451 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:02:19.743458 kernel: Key type dns_resolver registered
Sep 5 00:02:19.743465 kernel: registered taskstats version 1
Sep 5 00:02:19.743472 kernel: Loading compiled-in X.509 certificates
Sep 5 00:02:19.743481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 076c0e39153760a09e2827c98096964655099fd6'
Sep 5 00:02:19.743487 kernel: Demotion targets for Node 0: null
Sep 5 00:02:19.743494 kernel: Key type .fscrypt registered
Sep 5 00:02:19.743501 kernel: Key type fscrypt-provisioning registered
Sep 5 00:02:19.743508 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:02:19.743514 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:02:19.743521 kernel: ima: No architecture policies found
Sep 5 00:02:19.743528 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 00:02:19.743536 kernel: clk: Disabling unused clocks
Sep 5 00:02:19.743543 kernel: PM: genpd: Disabling unused power domains
Sep 5 00:02:19.743550 kernel: Warning: unable to open an initial console.
Sep 5 00:02:19.743557 kernel: Freeing unused kernel memory: 38976K
Sep 5 00:02:19.743564 kernel: Run /init as init process
Sep 5 00:02:19.743571 kernel: with arguments:
Sep 5 00:02:19.743578 kernel: /init
Sep 5 00:02:19.743584 kernel: with environment:
Sep 5 00:02:19.743591 kernel: HOME=/
Sep 5 00:02:19.743597 kernel: TERM=linux
Sep 5 00:02:19.743612 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:02:19.743620 systemd[1]: Successfully made /usr/ read-only.
Sep 5 00:02:19.743630 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:02:19.743638 systemd[1]: Detected virtualization kvm.
Sep 5 00:02:19.743645 systemd[1]: Detected architecture arm64.
Sep 5 00:02:19.743652 systemd[1]: Running in initrd.
Sep 5 00:02:19.743659 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:02:19.743669 systemd[1]: Hostname set to .
Sep 5 00:02:19.743676 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:02:19.743684 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:02:19.743691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:02:19.743698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:02:19.743706 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:02:19.743714 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:02:19.743721 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:02:19.743731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:02:19.743739 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:02:19.743747 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:02:19.743754 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:02:19.743761 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:02:19.743769 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:02:19.743776 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:02:19.743785 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:02:19.743792 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:02:19.743800 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:02:19.743807 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:02:19.743814 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:02:19.743822 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 5 00:02:19.743829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:02:19.743837 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:02:19.743845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:02:19.743853 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:02:19.743861 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:02:19.743869 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:02:19.743877 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:02:19.743885 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 5 00:02:19.743893 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:02:19.743901 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:02:19.743908 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:02:19.743917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:02:19.743925 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:02:19.743933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:02:19.743940 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:02:19.743949 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:02:19.743974 systemd-journald[244]: Collecting audit messages is disabled.
Sep 5 00:02:19.743993 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:02:19.744001 systemd-journald[244]: Journal started
Sep 5 00:02:19.744020 systemd-journald[244]: Runtime Journal (/run/log/journal/a8d8517e2c33438e84ad57ecac658299) is 6M, max 48.5M, 42.4M free.
Sep 5 00:02:19.735142 systemd-modules-load[245]: Inserted module 'overlay'
Sep 5 00:02:19.746755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:02:19.750044 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:02:19.750075 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:02:19.751133 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 5 00:02:19.751975 kernel: Bridge firewalling registered
Sep 5 00:02:19.752097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:02:19.753161 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:02:19.756037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:02:19.759509 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:02:19.762875 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:02:19.767014 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:02:19.768677 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:02:19.772276 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 5 00:02:19.773464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:02:19.774764 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:02:19.776623 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:02:19.780466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:02:19.783464 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=936dbc4ea592050e15794e1e6e7f70cd7cba0dbef72270410b4bbc6a29324de7
Sep 5 00:02:19.820068 systemd-resolved[296]: Positive Trust Anchors:
Sep 5 00:02:19.820088 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:02:19.820119 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:02:19.824906 systemd-resolved[296]: Defaulting to hostname 'linux'.
Sep 5 00:02:19.826315 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:02:19.828094 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:02:19.852467 kernel: SCSI subsystem initialized Sep 5 00:02:19.856464 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:02:19.864474 kernel: iscsi: registered transport (tcp) Sep 5 00:02:19.876463 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:02:19.876485 kernel: QLogic iSCSI HBA Driver Sep 5 00:02:19.892015 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:02:19.918467 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:02:19.920242 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:02:19.961943 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:02:19.964019 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:02:20.020477 kernel: raid6: neonx8 gen() 15764 MB/s Sep 5 00:02:20.037463 kernel: raid6: neonx4 gen() 15810 MB/s Sep 5 00:02:20.054459 kernel: raid6: neonx2 gen() 13229 MB/s Sep 5 00:02:20.071460 kernel: raid6: neonx1 gen() 10420 MB/s Sep 5 00:02:20.088474 kernel: raid6: int64x8 gen() 6897 MB/s Sep 5 00:02:20.105476 kernel: raid6: int64x4 gen() 7337 MB/s Sep 5 00:02:20.122460 kernel: raid6: int64x2 gen() 6093 MB/s Sep 5 00:02:20.139460 kernel: raid6: int64x1 gen() 5049 MB/s Sep 5 00:02:20.139489 kernel: raid6: using algorithm neonx4 gen() 15810 MB/s Sep 5 00:02:20.156471 kernel: raid6: .... xor() 12330 MB/s, rmw enabled
Sep 5 00:02:20.156487 kernel: raid6: using neon recovery algorithm Sep 5 00:02:20.161485 kernel: xor: measuring software checksum speed Sep 5 00:02:20.161526 kernel: 8regs : 21613 MB/sec Sep 5 00:02:20.162542 kernel: 32regs : 21156 MB/sec Sep 5 00:02:20.162555 kernel: arm64_neon : 28147 MB/sec Sep 5 00:02:20.162564 kernel: xor: using function: arm64_neon (28147 MB/sec) Sep 5 00:02:20.214472 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:02:20.220802 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:02:20.224111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:02:20.253400 systemd-udevd[498]: Using default interface naming scheme 'v255'. Sep 5 00:02:20.257606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:02:20.259587 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:02:20.284961 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Sep 5 00:02:20.306399 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:02:20.308199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:02:20.359481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:02:20.362226 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 00:02:20.407454 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 5 00:02:20.413582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:02:20.414409 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 5 00:02:20.413707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:02:20.416128 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:02:20.417961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:02:20.422854 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:02:20.422875 kernel: GPT:9289727 != 19775487 Sep 5 00:02:20.422885 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:02:20.422894 kernel: GPT:9289727 != 19775487 Sep 5 00:02:20.422904 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 00:02:20.422912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:02:20.465543 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 5 00:02:20.466681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:02:20.469508 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:02:20.482312 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 5 00:02:20.489634 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:02:20.495432 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 5 00:02:20.496352 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 5 00:02:20.498785 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:02:20.500431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:02:20.502311 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:02:20.504935 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:02:20.506565 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:02:20.526340 disk-uuid[590]: Primary Header is updated. 
Sep 5 00:02:20.526340 disk-uuid[590]: Secondary Entries is updated. Sep 5 00:02:20.526340 disk-uuid[590]: Secondary Header is updated. Sep 5 00:02:20.530464 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:02:20.530929 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:02:20.535453 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:02:21.536460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:02:21.537725 disk-uuid[594]: The operation has completed successfully. Sep 5 00:02:21.564055 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:02:21.564172 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:02:21.587169 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:02:21.610286 sh[611]: Success Sep 5 00:02:21.622822 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 00:02:21.622864 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:02:21.622875 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 5 00:02:21.629467 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 5 00:02:21.653788 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:02:21.656154 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:02:21.671491 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 5 00:02:21.675959 kernel: BTRFS: device fsid 7cf88bee-c029-4534-8152-24a8f9f8db3f devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (623) Sep 5 00:02:21.675985 kernel: BTRFS info (device dm-0): first mount of filesystem 7cf88bee-c029-4534-8152-24a8f9f8db3f Sep 5 00:02:21.675996 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 5 00:02:21.680471 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:02:21.680495 kernel: BTRFS info (device dm-0): enabling free space tree Sep 5 00:02:21.681164 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:02:21.682213 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 5 00:02:21.683385 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:02:21.684079 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:02:21.685404 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:02:21.705929 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654) Sep 5 00:02:21.705969 kernel: BTRFS info (device vda6): first mount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2 Sep 5 00:02:21.705979 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 00:02:21.709918 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:02:21.709957 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:02:21.714466 kernel: BTRFS info (device vda6): last unmount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2 Sep 5 00:02:21.714505 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:02:21.716472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 5 00:02:21.777516 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:02:21.780558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:02:21.812956 ignition[697]: Ignition 2.21.0 Sep 5 00:02:21.812973 ignition[697]: Stage: fetch-offline Sep 5 00:02:21.813005 ignition[697]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:02:21.813013 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:02:21.813177 ignition[697]: parsed url from cmdline: "" Sep 5 00:02:21.813180 ignition[697]: no config URL provided Sep 5 00:02:21.813184 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:02:21.813190 ignition[697]: no config at "/usr/lib/ignition/user.ign" Sep 5 00:02:21.813211 ignition[697]: op(1): [started] loading QEMU firmware config module Sep 5 00:02:21.813215 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 5 00:02:21.819524 ignition[697]: op(1): [finished] loading QEMU firmware config module Sep 5 00:02:21.826830 systemd-networkd[801]: lo: Link UP Sep 5 00:02:21.826842 systemd-networkd[801]: lo: Gained carrier Sep 5 00:02:21.827507 systemd-networkd[801]: Enumeration completed Sep 5 00:02:21.827884 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:02:21.827888 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:02:21.828516 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:02:21.828787 systemd-networkd[801]: eth0: Link UP Sep 5 00:02:21.828873 systemd-networkd[801]: eth0: Gained carrier Sep 5 00:02:21.828882 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:02:21.829499 systemd[1]: Reached target network.target - Network. 
Sep 5 00:02:21.858490 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:02:21.872789 ignition[697]: parsing config with SHA512: 8f16be77e728b10a255a6f47e5671b749778b33e960fa57c356354cb896bf135f0fbc958608c7905503f68ec421dabf4c76b8830c4329f4600a156ec00025b0f Sep 5 00:02:21.876828 unknown[697]: fetched base config from "system" Sep 5 00:02:21.876841 unknown[697]: fetched user config from "qemu" Sep 5 00:02:21.877193 ignition[697]: fetch-offline: fetch-offline passed Sep 5 00:02:21.879140 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:02:21.877256 ignition[697]: Ignition finished successfully Sep 5 00:02:21.880273 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:02:21.881127 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 00:02:21.912892 ignition[810]: Ignition 2.21.0 Sep 5 00:02:21.912907 ignition[810]: Stage: kargs Sep 5 00:02:21.913047 ignition[810]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:02:21.913056 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:02:21.914564 ignition[810]: kargs: kargs passed Sep 5 00:02:21.914626 ignition[810]: Ignition finished successfully Sep 5 00:02:21.917433 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:02:21.919215 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 5 00:02:21.950071 ignition[818]: Ignition 2.21.0 Sep 5 00:02:21.950087 ignition[818]: Stage: disks Sep 5 00:02:21.950212 ignition[818]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:02:21.950221 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:02:21.953055 ignition[818]: disks: disks passed Sep 5 00:02:21.953118 ignition[818]: Ignition finished successfully Sep 5 00:02:21.954686 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:02:21.955714 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:02:21.957152 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:02:21.958892 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:02:21.960507 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:02:21.961982 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:02:21.964243 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:02:21.990857 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 5 00:02:21.994876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:02:21.996889 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:02:22.063453 kernel: EXT4-fs (vda9): mounted filesystem c1aea666-7bbc-4a3b-a66d-c37ebbad8baa r/w with ordered data mode. Quota mode: none. Sep 5 00:02:22.064328 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:02:22.065468 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:02:22.068087 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:02:22.069985 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:02:22.070804 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 5 00:02:22.070840 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 00:02:22.070861 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:02:22.077627 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:02:22.080584 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:02:22.083455 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835) Sep 5 00:02:22.085263 kernel: BTRFS info (device vda6): first mount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2 Sep 5 00:02:22.085293 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 00:02:22.087710 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:02:22.087744 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:02:22.088850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:02:22.116034 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:02:22.120034 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:02:22.123631 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:02:22.126996 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:02:22.189514 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:02:22.191564 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 00:02:22.192969 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 00:02:22.211453 kernel: BTRFS info (device vda6): last unmount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2 Sep 5 00:02:22.222594 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 5 00:02:22.233943 ignition[950]: INFO : Ignition 2.21.0 Sep 5 00:02:22.233943 ignition[950]: INFO : Stage: mount Sep 5 00:02:22.235239 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:02:22.235239 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:02:22.236925 ignition[950]: INFO : mount: mount passed Sep 5 00:02:22.236925 ignition[950]: INFO : Ignition finished successfully Sep 5 00:02:22.237842 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 00:02:22.239876 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:02:22.803724 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 00:02:22.805408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:02:22.822458 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (962) Sep 5 00:02:22.822493 kernel: BTRFS info (device vda6): first mount of filesystem 6c344b23-2ce1-4a61-81ba-a1268f9a3fe2 Sep 5 00:02:22.824138 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 00:02:22.826515 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:02:22.826553 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:02:22.827908 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 5 00:02:22.852277 ignition[979]: INFO : Ignition 2.21.0 Sep 5 00:02:22.852277 ignition[979]: INFO : Stage: files Sep 5 00:02:22.854614 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:02:22.854614 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:02:22.854614 ignition[979]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:02:22.854614 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:02:22.854614 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:02:22.859696 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:02:22.859696 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:02:22.859696 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:02:22.859696 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 5 00:02:22.859696 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 5 00:02:22.856833 unknown[979]: wrote ssh authorized keys file for user: core Sep 5 00:02:22.913899 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:02:23.437740 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 5 00:02:23.437740 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:02:23.440869 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:02:23.440869 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:02:23.440869 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:02:23.440869 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:02:23.440869 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:02:23.440869 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:02:23.440869 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:02:23.450775 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:02:23.450775 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:02:23.450775 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 00:02:23.450775 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 00:02:23.450775 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 00:02:23.450775 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 5 00:02:23.532763 systemd-networkd[801]: eth0: Gained IPv6LL Sep 5 00:02:24.066010 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 00:02:24.685323 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 00:02:24.685323 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 00:02:24.688174 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:02:24.690186 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:02:24.690186 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 00:02:24.690186 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 5 00:02:24.694006 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:02:24.694006 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:02:24.694006 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 5 00:02:24.694006 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:02:24.708684 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:02:24.711370 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:02:24.712597 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:02:24.712597 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:02:24.712597 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:02:24.712597 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:02:24.718785 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:02:24.718785 ignition[979]: INFO : files: files passed Sep 5 00:02:24.718785 ignition[979]: INFO : Ignition finished successfully Sep 5 00:02:24.715596 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:02:24.718598 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:02:24.720135 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:02:24.735750 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 00:02:24.736544 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:02:24.737549 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:02:24.739704 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:02:24.739704 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:02:24.742340 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:02:24.742062 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:02:24.743543 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Sep 5 00:02:24.745990 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:02:24.774820 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:02:24.774940 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:02:24.776538 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:02:24.778058 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:02:24.779427 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:02:24.780111 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:02:24.792964 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:02:24.794973 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:02:24.811560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:02:24.812482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:02:24.814299 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:02:24.815688 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:02:24.815794 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:02:24.817868 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:02:24.819320 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:02:24.820594 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:02:24.821900 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:02:24.823403 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:02:24.824995 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Sep 5 00:02:24.826416 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:02:24.827983 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:02:24.829413 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:02:24.831176 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:02:24.832487 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:02:24.833711 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:02:24.833816 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:02:24.835620 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:02:24.837092 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:02:24.838734 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:02:24.839530 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:02:24.840505 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:02:24.840622 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:02:24.842911 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:02:24.843025 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:02:24.844472 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:02:24.845728 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:02:24.847221 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:02:24.848276 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:02:24.849433 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:02:24.850984 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 5 00:02:24.851063 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:02:24.852891 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:02:24.852963 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:02:24.854215 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:02:24.854315 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:02:24.855645 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:02:24.855737 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:02:24.857529 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:02:24.859380 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:02:24.860151 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:02:24.860277 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:02:24.861961 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:02:24.862047 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:02:24.867298 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:02:24.869583 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:02:24.877955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:02:24.881218 ignition[1036]: INFO : Ignition 2.21.0 Sep 5 00:02:24.881218 ignition[1036]: INFO : Stage: umount Sep 5 00:02:24.883615 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:02:24.883615 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:02:24.883615 ignition[1036]: INFO : umount: umount passed Sep 5 00:02:24.883615 ignition[1036]: INFO : Ignition finished successfully Sep 5 00:02:24.884700 systemd[1]: ignition-mount.service: Deactivated successfully. 
Sep 5 00:02:24.884792 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:02:24.886152 systemd[1]: Stopped target network.target - Network. Sep 5 00:02:24.887119 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:02:24.887167 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:02:24.888507 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:02:24.888542 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:02:24.889986 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:02:24.890030 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:02:24.891255 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:02:24.891291 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:02:24.892688 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:02:24.893986 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:02:24.895394 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:02:24.895503 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:02:24.897148 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:02:24.897214 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:02:24.900789 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:02:24.900890 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:02:24.904352 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 5 00:02:24.904701 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:02:24.904800 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:02:24.906940 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Sep 5 00:02:24.907526 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 00:02:24.908652 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 00:02:24.908690 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:02:24.910873 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 00:02:24.912273 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 00:02:24.912321 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:02:24.913924 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:02:24.913966 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:02:24.916140 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 00:02:24.916177 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:02:24.917768 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 00:02:24.917806 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:02:24.920181 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:02:24.924029 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 00:02:24.924084 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:02:24.929391 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 00:02:24.935666 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:02:24.936918 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 00:02:24.936955 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:02:24.938306 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 00:02:24.938332 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:02:24.939753 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 00:02:24.939793 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:02:24.941992 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 00:02:24.942032 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:02:24.944075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:02:24.944118 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:02:24.946918 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 00:02:24.947763 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 00:02:24.947812 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:02:24.950638 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 00:02:24.950689 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:02:24.953334 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 5 00:02:24.953373 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:02:24.956213 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 00:02:24.956248 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:02:24.958133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:02:24.958167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:02:24.961649 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 5 00:02:24.961697 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 5 00:02:24.961725 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 5 00:02:24.961753 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:02:24.961978 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 00:02:24.963619 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 00:02:24.967672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 00:02:24.967754 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 00:02:24.969702 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 00:02:24.973693 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 00:02:24.988344 systemd[1]: Switching root.
Sep 5 00:02:25.035846 systemd-journald[244]: Journal stopped
Sep 5 00:02:25.757421 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 5 00:02:25.757484 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 00:02:25.757503 kernel: SELinux: policy capability open_perms=1
Sep 5 00:02:25.757512 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 00:02:25.757523 kernel: SELinux: policy capability always_check_network=0
Sep 5 00:02:25.757532 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 00:02:25.757542 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 00:02:25.757551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 00:02:25.757560 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 00:02:25.757581 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 00:02:25.757592 kernel: audit: type=1403 audit(1757030545.188:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 00:02:25.757606 systemd[1]: Successfully loaded SELinux policy in 38.915ms.
Sep 5 00:02:25.757621 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.217ms.
Sep 5 00:02:25.757633 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:02:25.757645 systemd[1]: Detected virtualization kvm.
Sep 5 00:02:25.757655 systemd[1]: Detected architecture arm64.
Sep 5 00:02:25.757667 systemd[1]: Detected first boot.
Sep 5 00:02:25.757677 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:02:25.757688 zram_generator::config[1083]: No configuration found.
Sep 5 00:02:25.757698 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 00:02:25.757708 systemd[1]: Populated /etc with preset unit settings.
Sep 5 00:02:25.757720 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 00:02:25.757730 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 00:02:25.757740 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 00:02:25.757749 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:02:25.757759 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 00:02:25.757770 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 00:02:25.757780 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 00:02:25.757791 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 00:02:25.757801 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 00:02:25.757812 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 00:02:25.757822 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 00:02:25.757832 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 00:02:25.757842 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:02:25.757852 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:02:25.757862 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 00:02:25.757875 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 00:02:25.757886 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 00:02:25.757898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:02:25.757909 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 5 00:02:25.757919 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:02:25.757930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:02:25.757939 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 00:02:25.757949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 00:02:25.757959 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:02:25.757969 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 00:02:25.757981 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:02:25.757991 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:02:25.758001 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:02:25.758010 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:02:25.758020 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 00:02:25.758030 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 00:02:25.758040 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 00:02:25.758050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:02:25.758060 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:02:25.758071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:02:25.758081 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 00:02:25.758092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 00:02:25.758101 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 00:02:25.758112 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 00:02:25.758122 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 00:02:25.758132 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 00:02:25.758141 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 00:02:25.758151 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 00:02:25.758162 systemd[1]: Reached target machines.target - Containers.
Sep 5 00:02:25.758172 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 00:02:25.758182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:02:25.758192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:02:25.758202 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 00:02:25.758214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:02:25.758224 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:02:25.758234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:02:25.758245 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 00:02:25.758255 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:02:25.758265 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 00:02:25.758275 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 00:02:25.758285 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 00:02:25.758295 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 00:02:25.758305 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 00:02:25.758315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:02:25.758327 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:02:25.758338 kernel: loop: module loaded
Sep 5 00:02:25.758348 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:02:25.758357 kernel: fuse: init (API version 7.41)
Sep 5 00:02:25.758369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:02:25.758379 kernel: ACPI: bus type drm_connector registered
Sep 5 00:02:25.758388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 00:02:25.758398 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 00:02:25.758408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:02:25.758420 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 00:02:25.758430 systemd[1]: Stopped verity-setup.service.
Sep 5 00:02:25.758483 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 00:02:25.758495 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 00:02:25.758505 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 00:02:25.758517 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 00:02:25.758548 systemd-journald[1151]: Collecting audit messages is disabled.
Sep 5 00:02:25.758577 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 00:02:25.758589 systemd-journald[1151]: Journal started
Sep 5 00:02:25.758611 systemd-journald[1151]: Runtime Journal (/run/log/journal/a8d8517e2c33438e84ad57ecac658299) is 6M, max 48.5M, 42.4M free.
Sep 5 00:02:25.554028 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 00:02:25.579217 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 00:02:25.579589 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 00:02:25.760540 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:02:25.761121 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 00:02:25.763460 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 00:02:25.764619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:02:25.765797 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 00:02:25.766046 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 00:02:25.767256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:02:25.767416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:02:25.768524 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:02:25.768692 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:02:25.769731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:02:25.769877 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:02:25.771197 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 00:02:25.771353 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 00:02:25.773745 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:02:25.773898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:02:25.775008 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:02:25.776334 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:02:25.777667 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 00:02:25.778841 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 5 00:02:25.791705 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:02:25.793811 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 00:02:25.795637 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 00:02:25.796498 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 00:02:25.796540 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:02:25.798226 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 5 00:02:25.804390 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 00:02:25.805409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:02:25.806619 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 00:02:25.808377 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 00:02:25.809500 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:02:25.812589 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 00:02:25.813973 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:02:25.814884 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:02:25.817174 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 00:02:25.818310 systemd-journald[1151]: Time spent on flushing to /var/log/journal/a8d8517e2c33438e84ad57ecac658299 is 21.719ms for 891 entries.
Sep 5 00:02:25.818310 systemd-journald[1151]: System Journal (/var/log/journal/a8d8517e2c33438e84ad57ecac658299) is 8M, max 195.6M, 187.6M free.
Sep 5 00:02:25.845445 systemd-journald[1151]: Received client request to flush runtime journal.
Sep 5 00:02:25.845503 kernel: loop0: detected capacity change from 0 to 138376
Sep 5 00:02:25.819953 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:02:25.823479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:02:25.825268 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 00:02:25.826356 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 00:02:25.833262 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 00:02:25.836193 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 00:02:25.839846 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 5 00:02:25.842039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:02:25.854726 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 00:02:25.859309 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 5 00:02:25.863825 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 00:02:25.859322 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 5 00:02:25.864939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:02:25.867686 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 00:02:25.878522 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 5 00:02:25.891471 kernel: loop1: detected capacity change from 0 to 211168
Sep 5 00:02:25.907487 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 00:02:25.909772 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:02:25.914498 kernel: loop2: detected capacity change from 0 to 107312
Sep 5 00:02:25.934304 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 5 00:02:25.934633 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 5 00:02:25.939165 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:02:25.942489 kernel: loop3: detected capacity change from 0 to 138376
Sep 5 00:02:25.952469 kernel: loop4: detected capacity change from 0 to 211168
Sep 5 00:02:25.958461 kernel: loop5: detected capacity change from 0 to 107312
Sep 5 00:02:25.962724 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 00:02:25.963062 (sd-merge)[1224]: Merged extensions into '/usr'.
Sep 5 00:02:25.967771 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 00:02:25.967789 systemd[1]: Reloading...
Sep 5 00:02:26.036472 zram_generator::config[1258]: No configuration found.
Sep 5 00:02:26.083627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:02:26.087053 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 00:02:26.146003 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 00:02:26.146195 systemd[1]: Reloading finished in 178 ms.
Sep 5 00:02:26.162471 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 00:02:26.164111 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 00:02:26.175855 systemd[1]: Starting ensure-sysext.service...
Sep 5 00:02:26.177488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:02:26.185880 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Sep 5 00:02:26.185896 systemd[1]: Reloading...
Sep 5 00:02:26.192717 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 5 00:02:26.193000 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 5 00:02:26.193291 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 00:02:26.193648 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 00:02:26.194427 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 00:02:26.194864 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 5 00:02:26.194981 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 5 00:02:26.197941 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:02:26.198047 systemd-tmpfiles[1285]: Skipping /boot
Sep 5 00:02:26.206688 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:02:26.206788 systemd-tmpfiles[1285]: Skipping /boot
Sep 5 00:02:26.223471 zram_generator::config[1312]: No configuration found.
Sep 5 00:02:26.296767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:02:26.358207 systemd[1]: Reloading finished in 172 ms.
Sep 5 00:02:26.381942 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 00:02:26.388470 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:02:26.402532 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 00:02:26.404626 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 00:02:26.416103 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 00:02:26.420962 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:02:26.423369 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:02:26.425811 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 00:02:26.433696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:02:26.435523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:02:26.437419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:02:26.439670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:02:26.441610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:02:26.441733 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:02:26.447076 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 00:02:26.450474 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 00:02:26.452933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:02:26.453082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:02:26.454667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:02:26.454820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:02:26.457983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:02:26.458322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:02:26.460127 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Sep 5 00:02:26.463350 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 00:02:26.470092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:02:26.471528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:02:26.473347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:02:26.475261 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:02:26.490371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:02:26.491951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:02:26.492071 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:02:26.493226 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 00:02:26.494794 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 00:02:26.495848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:02:26.497893 augenrules[1404]: No rules
Sep 5 00:02:26.499451 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 00:02:26.502035 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 00:02:26.504518 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 00:02:26.507132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:02:26.508516 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:02:26.509964 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:02:26.510102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:02:26.513026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:02:26.513653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:02:26.515899 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:02:26.516325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:02:26.527701 systemd[1]: Finished ensure-sysext.service.
Sep 5 00:02:26.549060 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 5 00:02:26.557091 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 00:02:26.559222 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 00:02:26.577164 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:02:26.578067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:02:26.578124 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:02:26.579861 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 00:02:26.589774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:02:26.595713 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 00:02:26.613897 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 00:02:26.672740 systemd-networkd[1438]: lo: Link UP
Sep 5 00:02:26.672750 systemd-networkd[1438]: lo: Gained carrier
Sep 5 00:02:26.673319 systemd-resolved[1351]: Positive Trust Anchors:
Sep 5 00:02:26.673334 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:02:26.673366 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:02:26.673525 systemd-networkd[1438]: Enumeration completed
Sep 5 00:02:26.673728 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:02:26.674117 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:02:26.674127 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:02:26.675265 systemd-networkd[1438]: eth0: Link UP
Sep 5 00:02:26.675370 systemd-networkd[1438]: eth0: Gained carrier
Sep 5 00:02:26.675389 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:02:26.678053 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 5 00:02:26.681340 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Sep 5 00:02:26.684172 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 00:02:26.685615 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:02:26.686505 systemd[1]: Reached target network.target - Network.
Sep 5 00:02:26.687195 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:02:26.690062 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:02:26.691762 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:02:26.693585 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:02:26.694513 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:02:26.695422 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:02:26.697511 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:02:26.697543 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:02:26.698239 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:02:26.699199 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:02:26.700520 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:02:26.702232 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:02:26.704373 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:02:26.705535 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:02:26.707877 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:02:26.711900 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 00:02:26.714366 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 5 00:02:26.717632 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Sep 5 00:02:26.718863 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. Sep 5 00:02:26.720463 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:02:26.720516 systemd-timesyncd[1441]: Initial clock synchronization to Fri 2025-09-05 00:02:26.580209 UTC. Sep 5 00:02:26.721242 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:02:26.723038 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 00:02:26.726176 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:02:26.735388 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:02:26.736282 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:02:26.737100 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:02:26.737127 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:02:26.740251 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:02:26.742485 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:02:26.744718 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:02:26.747189 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:02:26.750144 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:02:26.751155 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:02:26.752112 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:02:26.753929 jq[1469]: false Sep 5 00:02:26.754573 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 5 00:02:26.756206 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:02:26.759640 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:02:26.763846 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:02:26.764706 extend-filesystems[1470]: Found /dev/vda6 Sep 5 00:02:26.767641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:02:26.769231 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:02:26.770746 extend-filesystems[1470]: Found /dev/vda9 Sep 5 00:02:26.769663 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:02:26.770327 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:02:26.772964 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:02:26.774968 extend-filesystems[1470]: Checking size of /dev/vda9 Sep 5 00:02:26.777472 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 00:02:26.780979 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:02:26.782537 jq[1489]: true Sep 5 00:02:26.782374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:02:26.784489 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:02:26.784907 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:02:26.785071 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:02:26.787788 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 5 00:02:26.787986 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:02:26.801295 extend-filesystems[1470]: Resized partition /dev/vda9 Sep 5 00:02:26.804556 extend-filesystems[1511]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 00:02:26.807109 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:02:26.807152 update_engine[1486]: I20250905 00:02:26.802603 1486 main.cc:92] Flatcar Update Engine starting Sep 5 00:02:26.810759 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:02:26.818462 jq[1497]: true Sep 5 00:02:26.830980 tar[1496]: linux-arm64/LICENSE Sep 5 00:02:26.831171 tar[1496]: linux-arm64/helm Sep 5 00:02:26.831459 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:02:26.847197 extend-filesystems[1511]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:02:26.847197 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:02:26.847197 extend-filesystems[1511]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:02:26.850249 extend-filesystems[1470]: Resized filesystem in /dev/vda9 Sep 5 00:02:26.848688 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:02:26.852937 dbus-daemon[1467]: [system] SELinux support is enabled Sep 5 00:02:26.850498 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:02:26.857043 update_engine[1486]: I20250905 00:02:26.856976 1486 update_check_scheduler.cc:74] Next update check in 8m29s Sep 5 00:02:26.858160 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 00:02:26.859884 systemd-logind[1480]: New seat seat0. Sep 5 00:02:26.869656 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:02:26.872240 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 5 00:02:26.879998 dbus-daemon[1467]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 00:02:26.889484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:02:26.891179 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:02:26.893241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:02:26.893395 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:02:26.895666 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:02:26.895780 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:02:26.899086 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:02:26.903525 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:02:26.919496 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:02:26.921283 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 5 00:02:26.950458 locksmithd[1536]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:02:27.000450 containerd[1498]: time="2025-09-05T00:02:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 5 00:02:27.001538 containerd[1498]: time="2025-09-05T00:02:27.001503943Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 5 00:02:27.010961 containerd[1498]: time="2025-09-05T00:02:27.010916628Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.589µs" Sep 5 00:02:27.010961 containerd[1498]: time="2025-09-05T00:02:27.010954394Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 5 00:02:27.011043 containerd[1498]: time="2025-09-05T00:02:27.010973336Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 5 00:02:27.011141 containerd[1498]: time="2025-09-05T00:02:27.011119922Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 5 00:02:27.011173 containerd[1498]: time="2025-09-05T00:02:27.011142126Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 5 00:02:27.011173 containerd[1498]: time="2025-09-05T00:02:27.011164802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:02:27.011234 containerd[1498]: time="2025-09-05T00:02:27.011216716Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:02:27.011260 containerd[1498]: time="2025-09-05T00:02:27.011232082Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:02:27.011549 containerd[1498]: time="2025-09-05T00:02:27.011522934Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:02:27.011549 containerd[1498]: time="2025-09-05T00:02:27.011546317Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:02:27.011605 containerd[1498]: time="2025-09-05T00:02:27.011559640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:02:27.011605 containerd[1498]: time="2025-09-05T00:02:27.011568796Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 5 00:02:27.011662 containerd[1498]: time="2025-09-05T00:02:27.011647866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 5 00:02:27.011841 containerd[1498]: time="2025-09-05T00:02:27.011817992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:02:27.012366 containerd[1498]: time="2025-09-05T00:02:27.011855169Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:02:27.012366 containerd[1498]: time="2025-09-05T00:02:27.011875683Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 5 00:02:27.012366 containerd[1498]: time="2025-09-05T00:02:27.011905865Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 5 00:02:27.012366 containerd[1498]: time="2025-09-05T00:02:27.012125154Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 5 00:02:27.012366 containerd[1498]: time="2025-09-05T00:02:27.012228314Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:02:27.016275 containerd[1498]: time="2025-09-05T00:02:27.016207898Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 5 00:02:27.016375 containerd[1498]: time="2025-09-05T00:02:27.016354169Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 5 00:02:27.016399 containerd[1498]: time="2025-09-05T00:02:27.016379360Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 5 00:02:27.016454 containerd[1498]: time="2025-09-05T00:02:27.016395316Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 5 00:02:27.016480 containerd[1498]: time="2025-09-05T00:02:27.016458312Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 5 00:02:27.016480 containerd[1498]: time="2025-09-05T00:02:27.016471241Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 5 00:02:27.016511 containerd[1498]: time="2025-09-05T00:02:27.016482835Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 5 00:02:27.016511 containerd[1498]: time="2025-09-05T00:02:27.016495489Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 5 00:02:27.016511 containerd[1498]: time="2025-09-05T00:02:27.016506178Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Sep 5 00:02:27.016562 containerd[1498]: time="2025-09-05T00:02:27.016516357Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 5 00:02:27.016562 containerd[1498]: time="2025-09-05T00:02:27.016526300Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 5 00:02:27.016562 containerd[1498]: time="2025-09-05T00:02:27.016539229Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 5 00:02:27.016690 containerd[1498]: time="2025-09-05T00:02:27.016670567Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 5 00:02:27.016715 containerd[1498]: time="2025-09-05T00:02:27.016696740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 5 00:02:27.017034 containerd[1498]: time="2025-09-05T00:02:27.017002526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 5 00:02:27.017099 containerd[1498]: time="2025-09-05T00:02:27.017068195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 5 00:02:27.017131 containerd[1498]: time="2025-09-05T00:02:27.017107651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 5 00:02:27.017149 containerd[1498]: time="2025-09-05T00:02:27.017135711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 5 00:02:27.017192 containerd[1498]: time="2025-09-05T00:02:27.017176228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 5 00:02:27.017222 containerd[1498]: time="2025-09-05T00:02:27.017198746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 5 00:02:27.017222 containerd[1498]: 
time="2025-09-05T00:02:27.017213130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 5 00:02:27.017268 containerd[1498]: time="2025-09-05T00:02:27.017230068Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 5 00:02:27.017268 containerd[1498]: time="2025-09-05T00:02:27.017245945Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 5 00:02:27.017489 containerd[1498]: time="2025-09-05T00:02:27.017468692Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 5 00:02:27.017516 containerd[1498]: time="2025-09-05T00:02:27.017498520Z" level=info msg="Start snapshots syncer" Sep 5 00:02:27.017534 containerd[1498]: time="2025-09-05T00:02:27.017523121Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 5 00:02:27.017891 containerd[1498]: time="2025-09-05T00:02:27.017853862Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 5 00:02:27.017981 containerd[1498]: time="2025-09-05T00:02:27.017918234Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 5 00:02:27.018029 containerd[1498]: time="2025-09-05T00:02:27.018007090Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 5 00:02:27.018162 containerd[1498]: time="2025-09-05T00:02:27.018139881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 5 00:02:27.018189 containerd[1498]: time="2025-09-05T00:02:27.018177491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 5 00:02:27.018207 containerd[1498]: time="2025-09-05T00:02:27.018193839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 5 00:02:27.018274 containerd[1498]: time="2025-09-05T00:02:27.018222331Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 5 00:02:27.018274 containerd[1498]: time="2025-09-05T00:02:27.018236400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 5 00:02:27.018274 containerd[1498]: time="2025-09-05T00:02:27.018262809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 5 00:02:27.018328 containerd[1498]: time="2025-09-05T00:02:27.018278489Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 5 00:02:27.018328 containerd[1498]: time="2025-09-05T00:02:27.018322111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 5 00:02:27.018367 containerd[1498]: time="2025-09-05T00:02:27.018337634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 5 00:02:27.018367 containerd[1498]: time="2025-09-05T00:02:27.018353433Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 5 00:02:27.018402 containerd[1498]: time="2025-09-05T00:02:27.018389981Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 00:02:27.018441 containerd[1498]: time="2025-09-05T00:02:27.018408530Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 00:02:27.018464 containerd[1498]: time="2025-09-05T00:02:27.018433249Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 00:02:27.018483 containerd[1498]: time="2025-09-05T00:02:27.018460523Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 00:02:27.018483 containerd[1498]: time="2025-09-05T00:02:27.018474081Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 5 00:02:27.018777 containerd[1498]: time="2025-09-05T00:02:27.018672699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 5 00:02:27.018813 containerd[1498]: time="2025-09-05T00:02:27.018787884Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 5 00:02:27.018885 containerd[1498]: time="2025-09-05T00:02:27.018870137Z" level=info msg="runtime interface created" Sep 5 00:02:27.018956 containerd[1498]: time="2025-09-05T00:02:27.018878940Z" level=info msg="created NRI interface" Sep 5 00:02:27.018982 containerd[1498]: time="2025-09-05T00:02:27.018958089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 5 00:02:27.018982 containerd[1498]: time="2025-09-05T00:02:27.018975655Z" level=info msg="Connect containerd service" Sep 5 00:02:27.019027 containerd[1498]: time="2025-09-05T00:02:27.019012361Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:02:27.019962 containerd[1498]: 
time="2025-09-05T00:02:27.019931999Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:02:27.104070 containerd[1498]: time="2025-09-05T00:02:27.104003546Z" level=info msg="Start subscribing containerd event" Sep 5 00:02:27.104165 containerd[1498]: time="2025-09-05T00:02:27.104081633Z" level=info msg="Start recovering state" Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104208491Z" level=info msg="Start event monitor" Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104229555Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104239537Z" level=info msg="Start streaming server" Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104248340Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104274749Z" level=info msg="runtime interface starting up..." Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104283906Z" level=info msg="starting plugins..." Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104299704Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104302062Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104353779Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:02:27.105446 containerd[1498]: time="2025-09-05T00:02:27.104675128Z" level=info msg="containerd successfully booted in 0.104975s" Sep 5 00:02:27.104542 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 5 00:02:27.231355 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:02:27.247310 tar[1496]: linux-arm64/README.md Sep 5 00:02:27.250322 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:02:27.261772 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:02:27.263783 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:02:27.270271 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:02:27.270507 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:02:27.273677 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:02:27.303149 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:02:27.305931 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:02:27.307857 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 00:02:27.308969 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:02:28.268582 systemd-networkd[1438]: eth0: Gained IPv6LL Sep 5 00:02:28.273908 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:02:28.275419 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:02:28.277586 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:02:28.279708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:02:28.287966 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:02:28.300906 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:02:28.301144 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:02:28.302597 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:02:28.304858 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 5 00:02:28.832877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:02:28.834192 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:02:28.836707 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:02:28.837128 systemd[1]: Startup finished in 2.001s (kernel) + 5.599s (initrd) + 3.687s (userspace) = 11.288s. Sep 5 00:02:29.161999 kubelet[1606]: E0905 00:02:29.161893 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:02:29.164532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:02:29.164658 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:02:29.164972 systemd[1]: kubelet.service: Consumed 734ms CPU time, 255.8M memory peak. Sep 5 00:02:32.757897 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:02:32.759120 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:33250.service - OpenSSH per-connection server daemon (10.0.0.1:33250). Sep 5 00:02:32.820326 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 33250 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 5 00:02:32.822268 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:02:32.827871 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:02:32.828797 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:02:32.833911 systemd-logind[1480]: New session 1 of user core. 
Sep 5 00:02:32.851494 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:02:32.854369 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:02:32.874587 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:02:32.876785 systemd-logind[1480]: New session c1 of user core. Sep 5 00:02:32.978609 systemd[1624]: Queued start job for default target default.target. Sep 5 00:02:32.989289 systemd[1624]: Created slice app.slice - User Application Slice. Sep 5 00:02:32.989317 systemd[1624]: Reached target paths.target - Paths. Sep 5 00:02:32.989351 systemd[1624]: Reached target timers.target - Timers. Sep 5 00:02:32.990534 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:02:32.999142 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:02:32.999198 systemd[1624]: Reached target sockets.target - Sockets. Sep 5 00:02:32.999231 systemd[1624]: Reached target basic.target - Basic System. Sep 5 00:02:32.999257 systemd[1624]: Reached target default.target - Main User Target. Sep 5 00:02:32.999280 systemd[1624]: Startup finished in 116ms. Sep 5 00:02:32.999488 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:02:33.001047 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:02:33.059142 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:33256.service - OpenSSH per-connection server daemon (10.0.0.1:33256). Sep 5 00:02:33.118612 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 33256 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 5 00:02:33.119588 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:02:33.123804 systemd-logind[1480]: New session 2 of user core. Sep 5 00:02:33.139613 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 5 00:02:33.190033 sshd[1637]: Connection closed by 10.0.0.1 port 33256 Sep 5 00:02:33.190549 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Sep 5 00:02:33.199312 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:33256.service: Deactivated successfully. Sep 5 00:02:33.202971 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:02:33.203914 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:02:33.206682 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:33258.service - OpenSSH per-connection server daemon (10.0.0.1:33258). Sep 5 00:02:33.207713 systemd-logind[1480]: Removed session 2. Sep 5 00:02:33.266879 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 33258 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 5 00:02:33.268146 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:02:33.272500 systemd-logind[1480]: New session 3 of user core. Sep 5 00:02:33.282597 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:02:33.328866 sshd[1645]: Connection closed by 10.0.0.1 port 33258 Sep 5 00:02:33.329359 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Sep 5 00:02:33.338224 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:33258.service: Deactivated successfully. Sep 5 00:02:33.340586 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:02:33.341137 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:02:33.343385 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:33262.service - OpenSSH per-connection server daemon (10.0.0.1:33262). Sep 5 00:02:33.343870 systemd-logind[1480]: Removed session 3. 
Sep 5 00:02:33.403318 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 33262 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:02:33.405420 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:33.409340 systemd-logind[1480]: New session 4 of user core.
Sep 5 00:02:33.418599 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 00:02:33.471895 sshd[1653]: Connection closed by 10.0.0.1 port 33262
Sep 5 00:02:33.472217 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:33.482376 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:33262.service: Deactivated successfully.
Sep 5 00:02:33.484862 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 00:02:33.486949 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit.
Sep 5 00:02:33.489069 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:33276.service - OpenSSH per-connection server daemon (10.0.0.1:33276).
Sep 5 00:02:33.489772 systemd-logind[1480]: Removed session 4.
Sep 5 00:02:33.557989 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 33276 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:02:33.558454 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:33.562508 systemd-logind[1480]: New session 5 of user core.
Sep 5 00:02:33.578598 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 00:02:33.640502 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 00:02:33.641126 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:02:33.666017 sudo[1662]: pam_unix(sudo:session): session closed for user root
Sep 5 00:02:33.668617 sshd[1661]: Connection closed by 10.0.0.1 port 33276
Sep 5 00:02:33.667741 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:33.684780 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:33276.service: Deactivated successfully.
Sep 5 00:02:33.686207 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 00:02:33.686977 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit.
Sep 5 00:02:33.689914 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:33280.service - OpenSSH per-connection server daemon (10.0.0.1:33280).
Sep 5 00:02:33.690862 systemd-logind[1480]: Removed session 5.
Sep 5 00:02:33.765233 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 33280 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:02:33.766469 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:33.770676 systemd-logind[1480]: New session 6 of user core.
Sep 5 00:02:33.781582 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 00:02:33.832772 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 00:02:33.833025 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:02:33.905934 sudo[1672]: pam_unix(sudo:session): session closed for user root
Sep 5 00:02:33.910566 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 5 00:02:33.910805 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:02:33.918311 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 00:02:33.958714 augenrules[1694]: No rules
Sep 5 00:02:33.959803 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 00:02:33.960004 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 00:02:33.961286 sudo[1671]: pam_unix(sudo:session): session closed for user root
Sep 5 00:02:33.963539 sshd[1670]: Connection closed by 10.0.0.1 port 33280
Sep 5 00:02:33.963893 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Sep 5 00:02:33.972173 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:33280.service: Deactivated successfully.
Sep 5 00:02:33.973627 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 00:02:33.974173 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit.
Sep 5 00:02:33.976602 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:33288.service - OpenSSH per-connection server daemon (10.0.0.1:33288).
Sep 5 00:02:33.977493 systemd-logind[1480]: Removed session 6.
Sep 5 00:02:34.031270 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 33288 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:02:34.032555 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:02:34.036509 systemd-logind[1480]: New session 7 of user core.
Sep 5 00:02:34.045566 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 00:02:34.096120 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 00:02:34.096694 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:02:34.389488 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 00:02:34.409811 (dockerd)[1726]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 00:02:34.613766 dockerd[1726]: time="2025-09-05T00:02:34.613707576Z" level=info msg="Starting up"
Sep 5 00:02:34.616465 dockerd[1726]: time="2025-09-05T00:02:34.616426100Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 5 00:02:34.639168 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1936901274-merged.mount: Deactivated successfully.
Sep 5 00:02:34.660660 dockerd[1726]: time="2025-09-05T00:02:34.660565671Z" level=info msg="Loading containers: start."
Sep 5 00:02:34.668462 kernel: Initializing XFRM netlink socket
Sep 5 00:02:34.846105 systemd-networkd[1438]: docker0: Link UP
Sep 5 00:02:34.849378 dockerd[1726]: time="2025-09-05T00:02:34.849335110Z" level=info msg="Loading containers: done."
Sep 5 00:02:34.862293 dockerd[1726]: time="2025-09-05T00:02:34.862243631Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 00:02:34.862429 dockerd[1726]: time="2025-09-05T00:02:34.862333329Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 5 00:02:34.862477 dockerd[1726]: time="2025-09-05T00:02:34.862456119Z" level=info msg="Initializing buildkit"
Sep 5 00:02:34.882903 dockerd[1726]: time="2025-09-05T00:02:34.882860305Z" level=info msg="Completed buildkit initialization"
Sep 5 00:02:34.888601 dockerd[1726]: time="2025-09-05T00:02:34.888559695Z" level=info msg="Daemon has completed initialization"
Sep 5 00:02:34.888735 dockerd[1726]: time="2025-09-05T00:02:34.888651658Z" level=info msg="API listen on /run/docker.sock"
Sep 5 00:02:34.888813 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 00:02:35.417483 containerd[1498]: time="2025-09-05T00:02:35.417126948Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 5 00:02:35.637336 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2533724721-merged.mount: Deactivated successfully.
Sep 5 00:02:36.045097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481212652.mount: Deactivated successfully.
Sep 5 00:02:37.050171 containerd[1498]: time="2025-09-05T00:02:37.050105786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:37.051000 containerd[1498]: time="2025-09-05T00:02:37.050963016Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 5 00:02:37.051421 containerd[1498]: time="2025-09-05T00:02:37.051390397Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:37.054025 containerd[1498]: time="2025-09-05T00:02:37.053984981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:37.054783 containerd[1498]: time="2025-09-05T00:02:37.054748565Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.637580731s"
Sep 5 00:02:37.054817 containerd[1498]: time="2025-09-05T00:02:37.054784717Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 5 00:02:37.055859 containerd[1498]: time="2025-09-05T00:02:37.055837402Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 5 00:02:38.366088 containerd[1498]: time="2025-09-05T00:02:38.366032300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:38.367803 containerd[1498]: time="2025-09-05T00:02:38.367721619Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 5 00:02:38.368338 containerd[1498]: time="2025-09-05T00:02:38.368303701Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:38.371288 containerd[1498]: time="2025-09-05T00:02:38.371256782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:38.372208 containerd[1498]: time="2025-09-05T00:02:38.372180600Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.316315642s"
Sep 5 00:02:38.372208 containerd[1498]: time="2025-09-05T00:02:38.372206376Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 5 00:02:38.372969 containerd[1498]: time="2025-09-05T00:02:38.372929368Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 5 00:02:39.414994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:02:39.416452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:02:39.572290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:02:39.575856 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:02:39.583609 containerd[1498]: time="2025-09-05T00:02:39.582663438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:39.583609 containerd[1498]: time="2025-09-05T00:02:39.583252871Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 5 00:02:39.584094 containerd[1498]: time="2025-09-05T00:02:39.584069737Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:39.587675 containerd[1498]: time="2025-09-05T00:02:39.587625424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:39.588782 containerd[1498]: time="2025-09-05T00:02:39.588749722Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.215782941s"
Sep 5 00:02:39.588854 containerd[1498]: time="2025-09-05T00:02:39.588789501Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 5 00:02:39.589190 containerd[1498]: time="2025-09-05T00:02:39.589164891Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 5 00:02:39.615970 kubelet[2008]: E0905 00:02:39.615902 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:02:39.619151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:02:39.619266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:02:39.620536 systemd[1]: kubelet.service: Consumed 141ms CPU time, 106.3M memory peak.
Sep 5 00:02:40.540218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106941739.mount: Deactivated successfully.
Sep 5 00:02:40.943887 containerd[1498]: time="2025-09-05T00:02:40.943767724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:40.944335 containerd[1498]: time="2025-09-05T00:02:40.944305498Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 5 00:02:40.945035 containerd[1498]: time="2025-09-05T00:02:40.944996837Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:40.946844 containerd[1498]: time="2025-09-05T00:02:40.946818633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:40.947472 containerd[1498]: time="2025-09-05T00:02:40.947211815Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.358018899s"
Sep 5 00:02:40.947472 containerd[1498]: time="2025-09-05T00:02:40.947239131Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 5 00:02:40.947883 containerd[1498]: time="2025-09-05T00:02:40.947866348Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 5 00:02:41.479165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341596581.mount: Deactivated successfully.
Sep 5 00:02:42.165846 containerd[1498]: time="2025-09-05T00:02:42.165476364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:42.165846 containerd[1498]: time="2025-09-05T00:02:42.165797962Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 5 00:02:42.166746 containerd[1498]: time="2025-09-05T00:02:42.166716944Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:42.169509 containerd[1498]: time="2025-09-05T00:02:42.169474528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:42.170602 containerd[1498]: time="2025-09-05T00:02:42.170571928Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.222631044s"
Sep 5 00:02:42.170715 containerd[1498]: time="2025-09-05T00:02:42.170698508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 5 00:02:42.171578 containerd[1498]: time="2025-09-05T00:02:42.171550808Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 00:02:42.597925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405605089.mount: Deactivated successfully.
Sep 5 00:02:42.601090 containerd[1498]: time="2025-09-05T00:02:42.601055253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:02:42.602128 containerd[1498]: time="2025-09-05T00:02:42.602101733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 5 00:02:42.602933 containerd[1498]: time="2025-09-05T00:02:42.602887510Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:02:42.604856 containerd[1498]: time="2025-09-05T00:02:42.604642830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 00:02:42.605517 containerd[1498]: time="2025-09-05T00:02:42.605489304Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 433.730869ms"
Sep 5 00:02:42.605558 containerd[1498]: time="2025-09-05T00:02:42.605524182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 5 00:02:42.605955 containerd[1498]: time="2025-09-05T00:02:42.605938041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 5 00:02:43.051555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115465675.mount: Deactivated successfully.
Sep 5 00:02:44.756467 containerd[1498]: time="2025-09-05T00:02:44.756112389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:44.756774 containerd[1498]: time="2025-09-05T00:02:44.756731107Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 5 00:02:44.757716 containerd[1498]: time="2025-09-05T00:02:44.757686214Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:44.760995 containerd[1498]: time="2025-09-05T00:02:44.760946821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:02:44.762698 containerd[1498]: time="2025-09-05T00:02:44.762670176Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.156705953s"
Sep 5 00:02:44.762825 containerd[1498]: time="2025-09-05T00:02:44.762784968Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 5 00:02:49.676677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 5 00:02:49.681654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:02:49.815776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:02:49.819536 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:02:49.853759 kubelet[2169]: E0905 00:02:49.853709 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:02:49.856640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:02:49.856751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:02:49.857013 systemd[1]: kubelet.service: Consumed 133ms CPU time, 105.7M memory peak.
Sep 5 00:02:49.862278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:02:49.862408 systemd[1]: kubelet.service: Consumed 133ms CPU time, 105.7M memory peak.
Sep 5 00:02:49.864356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:02:49.883532 systemd[1]: Reload requested from client PID 2183 ('systemctl') (unit session-7.scope)...
Sep 5 00:02:49.883545 systemd[1]: Reloading...
Sep 5 00:02:49.948813 zram_generator::config[2227]: No configuration found.
Sep 5 00:02:50.033785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:02:50.117485 systemd[1]: Reloading finished in 233 ms.
Sep 5 00:02:50.162668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:02:50.165245 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:02:50.166349 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 00:02:50.166584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:02:50.166626 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak.
Sep 5 00:02:50.168021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:02:50.307374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:02:50.310567 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 00:02:50.338452 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:02:50.338452 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 5 00:02:50.338452 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:02:50.338752 kubelet[2274]: I0905 00:02:50.338493 2274 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 00:02:51.151555 kubelet[2274]: I0905 00:02:51.151506 2274 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 5 00:02:51.151555 kubelet[2274]: I0905 00:02:51.151534 2274 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 00:02:51.151763 kubelet[2274]: I0905 00:02:51.151737 2274 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 5 00:02:51.180460 kubelet[2274]: E0905 00:02:51.179749 2274 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 5 00:02:51.182754 kubelet[2274]: I0905 00:02:51.182720 2274 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:02:51.194196 kubelet[2274]: I0905 00:02:51.194171 2274 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 5 00:02:51.196643 kubelet[2274]: I0905 00:02:51.196626 2274 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 00:02:51.196918 kubelet[2274]: I0905 00:02:51.196882 2274 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 00:02:51.197059 kubelet[2274]: I0905 00:02:51.196908 2274 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 00:02:51.197141 kubelet[2274]: I0905 00:02:51.197120 2274 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 00:02:51.197141 kubelet[2274]: I0905 00:02:51.197128 2274 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 00:02:51.197318 kubelet[2274]: I0905 00:02:51.197305 2274 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:02:51.200173 kubelet[2274]: I0905 00:02:51.200069 2274 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 00:02:51.200173 kubelet[2274]: I0905 00:02:51.200092 2274 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:02:51.200173 kubelet[2274]: I0905 00:02:51.200114 2274 kubelet.go:386] "Adding apiserver pod source"
Sep 5 00:02:51.201610 kubelet[2274]: I0905 00:02:51.201590 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:02:51.202624 kubelet[2274]: I0905 00:02:51.202581 2274 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 5 00:02:51.203333 kubelet[2274]: I0905 00:02:51.203299 2274 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 00:02:51.203460 kubelet[2274]: W0905 00:02:51.203429 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 00:02:51.203733 kubelet[2274]: E0905 00:02:51.203703 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 5 00:02:51.204751 kubelet[2274]: E0905 00:02:51.204726 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 5 00:02:51.213564 kubelet[2274]: I0905 00:02:51.213321 2274 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 00:02:51.213564 kubelet[2274]: I0905 00:02:51.213537 2274 server.go:1289] "Started kubelet"
Sep 5 00:02:51.219349 kubelet[2274]: I0905 00:02:51.219114 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:02:51.219857 kubelet[2274]: I0905 00:02:51.219833 2274 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:02:51.223410 kubelet[2274]: I0905 00:02:51.223388 2274 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 00:02:51.223620 kubelet[2274]: E0905 00:02:51.223594 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:02:51.223916 kubelet[2274]: I0905 00:02:51.223889 2274 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 00:02:51.223916 kubelet[2274]: E0905 00:02:51.221590 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186239fed0aea072 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:02:51.213496434 +0000 UTC m=+0.899891428,LastTimestamp:2025-09-05 00:02:51.213496434 +0000 UTC m=+0.899891428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 5 00:02:51.224036 kubelet[2274]: I0905 00:02:51.224019 2274 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:02:51.224293 kubelet[2274]: E0905 00:02:51.224253 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms"
Sep 5 00:02:51.224450 kubelet[2274]: E0905 00:02:51.224405 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 5 00:02:51.224593 kubelet[2274]: I0905 00:02:51.224556 2274 factory.go:223] Registration of the systemd container factory successfully
Sep 5 00:02:51.224933 kubelet[2274]: I0905 00:02:51.224886 2274 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:02:51.226282 kubelet[2274]: I0905 00:02:51.225565 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 00:02:51.226282 kubelet[2274]: I0905 00:02:51.225801 2274 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 00:02:51.226517 kubelet[2274]: I0905 00:02:51.226351 2274 factory.go:223] Registration of the containerd container factory successfully
Sep 5 00:02:51.228087 kubelet[2274]: I0905 00:02:51.219232 2274 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 00:02:51.230494 kubelet[2274]: I0905 00:02:51.230463 2274 server.go:317] "Adding debug handlers to kubelet server"
Sep 5 00:02:51.239959 kubelet[2274]: I0905 00:02:51.239910 2274 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 00:02:51.240060 kubelet[2274]: I0905 00:02:51.240048 2274 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 00:02:51.240153 kubelet[2274]: I0905 00:02:51.240141 2274 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:02:51.243747 kubelet[2274]: I0905 00:02:51.243694 2274 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:02:51.245075 kubelet[2274]: I0905 00:02:51.245022 2274 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:02:51.245075 kubelet[2274]: I0905 00:02:51.245050 2274 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 5 00:02:51.245075 kubelet[2274]: I0905 00:02:51.245069 2274 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 00:02:51.245075 kubelet[2274]: I0905 00:02:51.245078 2274 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:02:51.245207 kubelet[2274]: E0905 00:02:51.245118 2274 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:02:51.248318 kubelet[2274]: E0905 00:02:51.248149 2274 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:02:51.324623 kubelet[2274]: E0905 00:02:51.324588 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:02:51.330801 kubelet[2274]: I0905 00:02:51.330779 2274 policy_none.go:49] "None policy: Start" Sep 5 00:02:51.330969 kubelet[2274]: I0905 00:02:51.330910 2274 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:02:51.330969 kubelet[2274]: I0905 00:02:51.330929 2274 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:02:51.337303 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:02:51.345574 kubelet[2274]: E0905 00:02:51.345544 2274 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:02:51.363696 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:02:51.385930 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 5 00:02:51.387513 kubelet[2274]: E0905 00:02:51.387490 2274 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:02:51.387707 kubelet[2274]: I0905 00:02:51.387682 2274 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:02:51.387752 kubelet[2274]: I0905 00:02:51.387700 2274 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:02:51.387987 kubelet[2274]: I0905 00:02:51.387956 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:02:51.389944 kubelet[2274]: E0905 00:02:51.389928 2274 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:02:51.390155 kubelet[2274]: E0905 00:02:51.390137 2274 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:02:51.425704 kubelet[2274]: E0905 00:02:51.425603 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" Sep 5 00:02:51.489838 kubelet[2274]: I0905 00:02:51.489808 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:02:51.490268 kubelet[2274]: E0905 00:02:51.490245 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 5 00:02:51.555559 systemd[1]: Created slice kubepods-burstable-pod6a7ed3bb080ee757ed0411b26a5bb48a.slice - libcontainer container kubepods-burstable-pod6a7ed3bb080ee757ed0411b26a5bb48a.slice. 
Sep 5 00:02:51.568907 kubelet[2274]: E0905 00:02:51.568869 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:51.571724 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 5 00:02:51.586561 kubelet[2274]: E0905 00:02:51.586499 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:51.588808 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 5 00:02:51.590262 kubelet[2274]: E0905 00:02:51.590234 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:51.626520 kubelet[2274]: I0905 00:02:51.626458 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:02:51.626520 kubelet[2274]: I0905 00:02:51.626497 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a7ed3bb080ee757ed0411b26a5bb48a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a7ed3bb080ee757ed0411b26a5bb48a\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:02:51.626660 kubelet[2274]: I0905 00:02:51.626553 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6a7ed3bb080ee757ed0411b26a5bb48a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a7ed3bb080ee757ed0411b26a5bb48a\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:02:51.626660 kubelet[2274]: I0905 00:02:51.626593 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:51.626660 kubelet[2274]: I0905 00:02:51.626622 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:51.626660 kubelet[2274]: I0905 00:02:51.626648 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:51.626749 kubelet[2274]: I0905 00:02:51.626665 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:51.626749 kubelet[2274]: I0905 00:02:51.626680 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6a7ed3bb080ee757ed0411b26a5bb48a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a7ed3bb080ee757ed0411b26a5bb48a\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:02:51.626749 kubelet[2274]: I0905 00:02:51.626695 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:51.691887 kubelet[2274]: I0905 00:02:51.691849 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:02:51.692278 kubelet[2274]: E0905 00:02:51.692234 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 5 00:02:51.826468 kubelet[2274]: E0905 00:02:51.826370 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" Sep 5 00:02:51.870426 containerd[1498]: time="2025-09-05T00:02:51.870373709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a7ed3bb080ee757ed0411b26a5bb48a,Namespace:kube-system,Attempt:0,}" Sep 5 00:02:51.886422 containerd[1498]: time="2025-09-05T00:02:51.886369251Z" level=info msg="connecting to shim 80d2cca322c2f5def3df327fb474299a6538420e3e9e84e240de93c77b8b47b7" address="unix:///run/containerd/s/068d2831beefa6cdb0f56736ecbc21e64e394be9e76d1fad43d386b8c26f94bb" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:02:51.888333 containerd[1498]: time="2025-09-05T00:02:51.888030509Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 5 00:02:51.893011 containerd[1498]: time="2025-09-05T00:02:51.892985504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 5 00:02:51.910210 containerd[1498]: time="2025-09-05T00:02:51.910155050Z" level=info msg="connecting to shim 82d79a2d1361a12ac1d2f04c3042fa5abfa1e033bbe73d41ea4ed2f1282d6124" address="unix:///run/containerd/s/135307ca865c761a667b0215e437c24ecbd7b059c2f9983e6ccae27b433dfbf0" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:02:51.910605 systemd[1]: Started cri-containerd-80d2cca322c2f5def3df327fb474299a6538420e3e9e84e240de93c77b8b47b7.scope - libcontainer container 80d2cca322c2f5def3df327fb474299a6538420e3e9e84e240de93c77b8b47b7. Sep 5 00:02:51.925941 containerd[1498]: time="2025-09-05T00:02:51.925903167Z" level=info msg="connecting to shim 69cb426ef833ac9a9853b198e4de1f601028c3f29e3225493a58c33854a33135" address="unix:///run/containerd/s/a7f7ba6b308a380a6479db0300e9b60c2a9c398a2d5e260f94df046f3db180db" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:02:51.939642 systemd[1]: Started cri-containerd-82d79a2d1361a12ac1d2f04c3042fa5abfa1e033bbe73d41ea4ed2f1282d6124.scope - libcontainer container 82d79a2d1361a12ac1d2f04c3042fa5abfa1e033bbe73d41ea4ed2f1282d6124. Sep 5 00:02:51.956611 systemd[1]: Started cri-containerd-69cb426ef833ac9a9853b198e4de1f601028c3f29e3225493a58c33854a33135.scope - libcontainer container 69cb426ef833ac9a9853b198e4de1f601028c3f29e3225493a58c33854a33135. 
Sep 5 00:02:51.960316 containerd[1498]: time="2025-09-05T00:02:51.960271758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a7ed3bb080ee757ed0411b26a5bb48a,Namespace:kube-system,Attempt:0,} returns sandbox id \"80d2cca322c2f5def3df327fb474299a6538420e3e9e84e240de93c77b8b47b7\"" Sep 5 00:02:51.966155 containerd[1498]: time="2025-09-05T00:02:51.965728237Z" level=info msg="CreateContainer within sandbox \"80d2cca322c2f5def3df327fb474299a6538420e3e9e84e240de93c77b8b47b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:02:51.972465 containerd[1498]: time="2025-09-05T00:02:51.972411842Z" level=info msg="Container f29351886ee92b35e73c8bb3a3d8889c5abe6fad82ad9bda564590fdd3c29829: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:02:51.981177 containerd[1498]: time="2025-09-05T00:02:51.981135476Z" level=info msg="CreateContainer within sandbox \"80d2cca322c2f5def3df327fb474299a6538420e3e9e84e240de93c77b8b47b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f29351886ee92b35e73c8bb3a3d8889c5abe6fad82ad9bda564590fdd3c29829\"" Sep 5 00:02:51.981822 containerd[1498]: time="2025-09-05T00:02:51.981779179Z" level=info msg="StartContainer for \"f29351886ee92b35e73c8bb3a3d8889c5abe6fad82ad9bda564590fdd3c29829\"" Sep 5 00:02:51.983048 containerd[1498]: time="2025-09-05T00:02:51.983022734Z" level=info msg="connecting to shim f29351886ee92b35e73c8bb3a3d8889c5abe6fad82ad9bda564590fdd3c29829" address="unix:///run/containerd/s/068d2831beefa6cdb0f56736ecbc21e64e394be9e76d1fad43d386b8c26f94bb" protocol=ttrpc version=3 Sep 5 00:02:51.991417 containerd[1498]: time="2025-09-05T00:02:51.991387024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"82d79a2d1361a12ac1d2f04c3042fa5abfa1e033bbe73d41ea4ed2f1282d6124\"" Sep 5 00:02:51.997037 containerd[1498]: 
time="2025-09-05T00:02:51.997008305Z" level=info msg="CreateContainer within sandbox \"82d79a2d1361a12ac1d2f04c3042fa5abfa1e033bbe73d41ea4ed2f1282d6124\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:02:51.997452 containerd[1498]: time="2025-09-05T00:02:51.997365291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"69cb426ef833ac9a9853b198e4de1f601028c3f29e3225493a58c33854a33135\"" Sep 5 00:02:52.001085 containerd[1498]: time="2025-09-05T00:02:52.001058399Z" level=info msg="CreateContainer within sandbox \"69cb426ef833ac9a9853b198e4de1f601028c3f29e3225493a58c33854a33135\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:02:52.006585 systemd[1]: Started cri-containerd-f29351886ee92b35e73c8bb3a3d8889c5abe6fad82ad9bda564590fdd3c29829.scope - libcontainer container f29351886ee92b35e73c8bb3a3d8889c5abe6fad82ad9bda564590fdd3c29829. 
Sep 5 00:02:52.011372 containerd[1498]: time="2025-09-05T00:02:52.010668737Z" level=info msg="Container 226cc7663d46f75151d2b298416e5f9df3e70d87a6f85d70211e837ec8eec907: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:02:52.012825 containerd[1498]: time="2025-09-05T00:02:52.012792136Z" level=info msg="Container a509002814fa6af0532a978bbb50358b8160da318baf473326148b7b7adf6b2d: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:02:52.021261 containerd[1498]: time="2025-09-05T00:02:52.021211736Z" level=info msg="CreateContainer within sandbox \"82d79a2d1361a12ac1d2f04c3042fa5abfa1e033bbe73d41ea4ed2f1282d6124\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"226cc7663d46f75151d2b298416e5f9df3e70d87a6f85d70211e837ec8eec907\"" Sep 5 00:02:52.021816 containerd[1498]: time="2025-09-05T00:02:52.021792934Z" level=info msg="StartContainer for \"226cc7663d46f75151d2b298416e5f9df3e70d87a6f85d70211e837ec8eec907\"" Sep 5 00:02:52.021989 containerd[1498]: time="2025-09-05T00:02:52.021960390Z" level=info msg="CreateContainer within sandbox \"69cb426ef833ac9a9853b198e4de1f601028c3f29e3225493a58c33854a33135\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a509002814fa6af0532a978bbb50358b8160da318baf473326148b7b7adf6b2d\"" Sep 5 00:02:52.022849 containerd[1498]: time="2025-09-05T00:02:52.022825051Z" level=info msg="connecting to shim 226cc7663d46f75151d2b298416e5f9df3e70d87a6f85d70211e837ec8eec907" address="unix:///run/containerd/s/135307ca865c761a667b0215e437c24ecbd7b059c2f9983e6ccae27b433dfbf0" protocol=ttrpc version=3 Sep 5 00:02:52.024465 containerd[1498]: time="2025-09-05T00:02:52.023477885Z" level=info msg="StartContainer for \"a509002814fa6af0532a978bbb50358b8160da318baf473326148b7b7adf6b2d\"" Sep 5 00:02:52.024465 containerd[1498]: time="2025-09-05T00:02:52.024372648Z" level=info msg="connecting to shim a509002814fa6af0532a978bbb50358b8160da318baf473326148b7b7adf6b2d" 
address="unix:///run/containerd/s/a7f7ba6b308a380a6479db0300e9b60c2a9c398a2d5e260f94df046f3db180db" protocol=ttrpc version=3 Sep 5 00:02:52.047641 systemd[1]: Started cri-containerd-226cc7663d46f75151d2b298416e5f9df3e70d87a6f85d70211e837ec8eec907.scope - libcontainer container 226cc7663d46f75151d2b298416e5f9df3e70d87a6f85d70211e837ec8eec907. Sep 5 00:02:52.048793 systemd[1]: Started cri-containerd-a509002814fa6af0532a978bbb50358b8160da318baf473326148b7b7adf6b2d.scope - libcontainer container a509002814fa6af0532a978bbb50358b8160da318baf473326148b7b7adf6b2d. Sep 5 00:02:52.055814 containerd[1498]: time="2025-09-05T00:02:52.055778102Z" level=info msg="StartContainer for \"f29351886ee92b35e73c8bb3a3d8889c5abe6fad82ad9bda564590fdd3c29829\" returns successfully" Sep 5 00:02:52.095182 kubelet[2274]: I0905 00:02:52.095149 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:02:52.095775 kubelet[2274]: E0905 00:02:52.095690 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 5 00:02:52.099081 containerd[1498]: time="2025-09-05T00:02:52.099046651Z" level=info msg="StartContainer for \"226cc7663d46f75151d2b298416e5f9df3e70d87a6f85d70211e837ec8eec907\" returns successfully" Sep 5 00:02:52.101647 containerd[1498]: time="2025-09-05T00:02:52.101569002Z" level=info msg="StartContainer for \"a509002814fa6af0532a978bbb50358b8160da318baf473326148b7b7adf6b2d\" returns successfully" Sep 5 00:02:52.256597 kubelet[2274]: E0905 00:02:52.256502 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:52.260747 kubelet[2274]: E0905 00:02:52.260719 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" 
Sep 5 00:02:52.261505 kubelet[2274]: E0905 00:02:52.261371 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:52.897832 kubelet[2274]: I0905 00:02:52.897605 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:02:53.262054 kubelet[2274]: E0905 00:02:53.261884 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:53.262054 kubelet[2274]: E0905 00:02:53.261981 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:54.227885 kubelet[2274]: E0905 00:02:54.227852 2274 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:02:54.265019 kubelet[2274]: E0905 00:02:54.264803 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:54.265019 kubelet[2274]: E0905 00:02:54.264900 2274 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:02:54.308567 kubelet[2274]: I0905 00:02:54.308533 2274 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:02:54.324396 kubelet[2274]: I0905 00:02:54.324366 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:02:54.335237 kubelet[2274]: E0905 00:02:54.335204 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 
00:02:54.335488 kubelet[2274]: I0905 00:02:54.335369 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:02:54.339298 kubelet[2274]: E0905 00:02:54.339273 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:02:54.339464 kubelet[2274]: I0905 00:02:54.339373 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:54.344529 kubelet[2274]: E0905 00:02:54.344504 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:55.100630 kubelet[2274]: I0905 00:02:55.100464 2274 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:55.102573 kubelet[2274]: E0905 00:02:55.102546 2274 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:02:55.204707 kubelet[2274]: I0905 00:02:55.204672 2274 apiserver.go:52] "Watching apiserver" Sep 5 00:02:55.224391 kubelet[2274]: I0905 00:02:55.224317 2274 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:02:56.248100 systemd[1]: Reload requested from client PID 2559 ('systemctl') (unit session-7.scope)... Sep 5 00:02:56.248114 systemd[1]: Reloading... Sep 5 00:02:56.310468 zram_generator::config[2605]: No configuration found. 
Sep 5 00:02:56.452883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:02:56.552995 systemd[1]: Reloading finished in 304 ms. Sep 5 00:02:56.571390 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:02:56.588172 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:02:56.588399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:02:56.588465 systemd[1]: kubelet.service: Consumed 1.283s CPU time, 127.4M memory peak. Sep 5 00:02:56.590120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:02:56.731654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:02:56.735497 (kubelet)[2644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:02:56.781025 kubelet[2644]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:02:56.781025 kubelet[2644]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:02:56.781025 kubelet[2644]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:02:56.781489 kubelet[2644]: I0905 00:02:56.781435 2644 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:02:56.789872 kubelet[2644]: I0905 00:02:56.789836 2644 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:02:56.789872 kubelet[2644]: I0905 00:02:56.789862 2644 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:02:56.790083 kubelet[2644]: I0905 00:02:56.790051 2644 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:02:56.791320 kubelet[2644]: I0905 00:02:56.791290 2644 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 00:02:56.793508 kubelet[2644]: I0905 00:02:56.793480 2644 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:02:56.797090 kubelet[2644]: I0905 00:02:56.797042 2644 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:02:56.799795 kubelet[2644]: I0905 00:02:56.799762 2644 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:02:56.799996 kubelet[2644]: I0905 00:02:56.799974 2644 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:02:56.800156 kubelet[2644]: I0905 00:02:56.800008 2644 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:02:56.800227 kubelet[2644]: I0905 00:02:56.800185 2644 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:02:56.800227 
kubelet[2644]: I0905 00:02:56.800195 2644 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 00:02:56.800272 kubelet[2644]: I0905 00:02:56.800234 2644 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:02:56.800387 kubelet[2644]: I0905 00:02:56.800374 2644 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 00:02:56.800418 kubelet[2644]: I0905 00:02:56.800397 2644 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:02:56.800870 kubelet[2644]: I0905 00:02:56.800829 2644 kubelet.go:386] "Adding apiserver pod source"
Sep 5 00:02:56.800870 kubelet[2644]: I0905 00:02:56.800856 2644 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:02:56.803522 kubelet[2644]: I0905 00:02:56.801623 2644 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 5 00:02:56.803522 kubelet[2644]: I0905 00:02:56.802189 2644 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 00:02:56.804926 kubelet[2644]: I0905 00:02:56.804892 2644 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 00:02:56.804982 kubelet[2644]: I0905 00:02:56.804946 2644 server.go:1289] "Started kubelet"
Sep 5 00:02:56.805945 kubelet[2644]: I0905 00:02:56.805898 2644 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 00:02:56.806292 kubelet[2644]: I0905 00:02:56.806272 2644 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 00:02:56.806431 kubelet[2644]: I0905 00:02:56.806396 2644 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 00:02:56.806793 kubelet[2644]: I0905 00:02:56.806765 2644 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:02:56.807960 kubelet[2644]: I0905 00:02:56.807939 2644 server.go:317] "Adding debug handlers to kubelet server"
Sep 5 00:02:56.814867 kubelet[2644]: I0905 00:02:56.814824 2644 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:02:56.816311 kubelet[2644]: E0905 00:02:56.816287 2644 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:02:56.816360 kubelet[2644]: I0905 00:02:56.816348 2644 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 00:02:56.817275 kubelet[2644]: I0905 00:02:56.817244 2644 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 00:02:56.817512 kubelet[2644]: I0905 00:02:56.817499 2644 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:02:56.819985 kubelet[2644]: I0905 00:02:56.819960 2644 factory.go:223] Registration of the systemd container factory successfully
Sep 5 00:02:56.820148 kubelet[2644]: I0905 00:02:56.820128 2644 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:02:56.821999 kubelet[2644]: E0905 00:02:56.821969 2644 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 00:02:56.822159 kubelet[2644]: I0905 00:02:56.822141 2644 factory.go:223] Registration of the containerd container factory successfully
Sep 5 00:02:56.828189 kubelet[2644]: I0905 00:02:56.828150 2644 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:02:56.829493 kubelet[2644]: I0905 00:02:56.829474 2644 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:02:56.829493 kubelet[2644]: I0905 00:02:56.829495 2644 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 5 00:02:56.829567 kubelet[2644]: I0905 00:02:56.829513 2644 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 00:02:56.829567 kubelet[2644]: I0905 00:02:56.829519 2644 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 5 00:02:56.829567 kubelet[2644]: E0905 00:02:56.829557 2644 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 00:02:56.859510 kubelet[2644]: I0905 00:02:56.859481 2644 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 00:02:56.859510 kubelet[2644]: I0905 00:02:56.859502 2644 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 00:02:56.859631 kubelet[2644]: I0905 00:02:56.859522 2644 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:02:56.859692 kubelet[2644]: I0905 00:02:56.859675 2644 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 00:02:56.859719 kubelet[2644]: I0905 00:02:56.859690 2644 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 00:02:56.859719 kubelet[2644]: I0905 00:02:56.859707 2644 policy_none.go:49] "None policy: Start"
Sep 5 00:02:56.859719 kubelet[2644]: I0905 00:02:56.859715 2644 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 5 00:02:56.859774 kubelet[2644]: I0905 00:02:56.859723 2644 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 00:02:56.859811 kubelet[2644]: I0905 00:02:56.859800 2644 state_mem.go:75] "Updated machine memory state"
Sep 5 00:02:56.863989 kubelet[2644]: E0905 00:02:56.863417 2644 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 5 00:02:56.863989 kubelet[2644]: I0905 00:02:56.863599 2644 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 00:02:56.863989 kubelet[2644]: I0905 00:02:56.863611 2644 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 00:02:56.864107 kubelet[2644]: I0905 00:02:56.864025 2644 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 00:02:56.864567 kubelet[2644]: E0905 00:02:56.864541 2644 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 5 00:02:56.931047 kubelet[2644]: I0905 00:02:56.931012 2644 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:02:56.931406 kubelet[2644]: I0905 00:02:56.931369 2644 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:02:56.932138 kubelet[2644]: I0905 00:02:56.932095 2644 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:02:56.966891 kubelet[2644]: I0905 00:02:56.966867 2644 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:02:56.972720 kubelet[2644]: I0905 00:02:56.972698 2644 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 5 00:02:56.972786 kubelet[2644]: I0905 00:02:56.972762 2644 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 5 00:02:57.018793 kubelet[2644]: I0905 00:02:57.018754 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:02:57.019045 kubelet[2644]: I0905 00:02:57.018924 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:02:57.019045 kubelet[2644]: I0905 00:02:57.018951 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:02:57.019182 kubelet[2644]: I0905 00:02:57.019135 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:02:57.019182 kubelet[2644]: I0905 00:02:57.019160 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a7ed3bb080ee757ed0411b26a5bb48a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a7ed3bb080ee757ed0411b26a5bb48a\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:02:57.019249 kubelet[2644]: I0905 00:02:57.019199 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a7ed3bb080ee757ed0411b26a5bb48a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a7ed3bb080ee757ed0411b26a5bb48a\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:02:57.019271 kubelet[2644]: I0905 00:02:57.019246 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:02:57.019290 kubelet[2644]: I0905 00:02:57.019267 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:02:57.019290 kubelet[2644]: I0905 00:02:57.019285 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a7ed3bb080ee757ed0411b26a5bb48a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a7ed3bb080ee757ed0411b26a5bb48a\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:02:57.802007 kubelet[2644]: I0905 00:02:57.801612 2644 apiserver.go:52] "Watching apiserver"
Sep 5 00:02:57.817975 kubelet[2644]: I0905 00:02:57.817939 2644 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 5 00:02:57.846922 kubelet[2644]: I0905 00:02:57.846856 2644 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:02:57.852232 kubelet[2644]: E0905 00:02:57.852204 2644 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:02:57.876597 kubelet[2644]: I0905 00:02:57.876488 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.875527328 podStartE2EDuration="1.875527328s" podCreationTimestamp="2025-09-05 00:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:02:57.869466542 +0000 UTC m=+1.128414127" watchObservedRunningTime="2025-09-05 00:02:57.875527328 +0000 UTC m=+1.134474913"
Sep 5 00:02:57.892140 kubelet[2644]: I0905 00:02:57.891828 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.89181457 podStartE2EDuration="1.89181457s" podCreationTimestamp="2025-09-05 00:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:02:57.881481788 +0000 UTC m=+1.140429373" watchObservedRunningTime="2025-09-05 00:02:57.89181457 +0000 UTC m=+1.150762155"
Sep 5 00:02:57.892140 kubelet[2644]: I0905 00:02:57.891907 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.891903542 podStartE2EDuration="1.891903542s" podCreationTimestamp="2025-09-05 00:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:02:57.891884508 +0000 UTC m=+1.150832093" watchObservedRunningTime="2025-09-05 00:02:57.891903542 +0000 UTC m=+1.150851127"
Sep 5 00:03:03.626583 kubelet[2644]: I0905 00:03:03.626537 2644 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 00:03:03.627067 containerd[1498]: time="2025-09-05T00:03:03.626910975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 00:03:03.627320 kubelet[2644]: I0905 00:03:03.627129 2644 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 00:03:04.737521 systemd[1]: Created slice kubepods-besteffort-pod9dcf2462_62aa_4e1d_b3d1_699927320ced.slice - libcontainer container kubepods-besteffort-pod9dcf2462_62aa_4e1d_b3d1_699927320ced.slice.
Sep 5 00:03:04.775861 kubelet[2644]: I0905 00:03:04.775821 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dcf2462-62aa-4e1d-b3d1-699927320ced-xtables-lock\") pod \"kube-proxy-wzdvw\" (UID: \"9dcf2462-62aa-4e1d-b3d1-699927320ced\") " pod="kube-system/kube-proxy-wzdvw"
Sep 5 00:03:04.775861 kubelet[2644]: I0905 00:03:04.775865 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dcf2462-62aa-4e1d-b3d1-699927320ced-lib-modules\") pod \"kube-proxy-wzdvw\" (UID: \"9dcf2462-62aa-4e1d-b3d1-699927320ced\") " pod="kube-system/kube-proxy-wzdvw"
Sep 5 00:03:04.776228 kubelet[2644]: I0905 00:03:04.775922 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnqr9\" (UniqueName: \"kubernetes.io/projected/9dcf2462-62aa-4e1d-b3d1-699927320ced-kube-api-access-dnqr9\") pod \"kube-proxy-wzdvw\" (UID: \"9dcf2462-62aa-4e1d-b3d1-699927320ced\") " pod="kube-system/kube-proxy-wzdvw"
Sep 5 00:03:04.776228 kubelet[2644]: I0905 00:03:04.775955 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9dcf2462-62aa-4e1d-b3d1-699927320ced-kube-proxy\") pod \"kube-proxy-wzdvw\" (UID: \"9dcf2462-62aa-4e1d-b3d1-699927320ced\") " pod="kube-system/kube-proxy-wzdvw"
Sep 5 00:03:04.902682 systemd[1]: Created slice kubepods-besteffort-pod67517df5_4e98_42f0_8f42_6609255e837a.slice - libcontainer container kubepods-besteffort-pod67517df5_4e98_42f0_8f42_6609255e837a.slice.
Sep 5 00:03:04.979119 kubelet[2644]: I0905 00:03:04.979048 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67517df5-4e98-42f0-8f42-6609255e837a-var-lib-calico\") pod \"tigera-operator-755d956888-bkv89\" (UID: \"67517df5-4e98-42f0-8f42-6609255e837a\") " pod="tigera-operator/tigera-operator-755d956888-bkv89"
Sep 5 00:03:04.979119 kubelet[2644]: I0905 00:03:04.979093 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prt9m\" (UniqueName: \"kubernetes.io/projected/67517df5-4e98-42f0-8f42-6609255e837a-kube-api-access-prt9m\") pod \"tigera-operator-755d956888-bkv89\" (UID: \"67517df5-4e98-42f0-8f42-6609255e837a\") " pod="tigera-operator/tigera-operator-755d956888-bkv89"
Sep 5 00:03:05.052874 containerd[1498]: time="2025-09-05T00:03:05.052765334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzdvw,Uid:9dcf2462-62aa-4e1d-b3d1-699927320ced,Namespace:kube-system,Attempt:0,}"
Sep 5 00:03:05.069381 containerd[1498]: time="2025-09-05T00:03:05.069307516Z" level=info msg="connecting to shim b733f3797234026720bb29887fd30289d41e101d81410da2e0f9aea930e9faaa" address="unix:///run/containerd/s/ac0d6da4feeff7a4d449603dd8279aaba4bbfd652b90d05a7f9f68fb85921040" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:03:05.098611 systemd[1]: Started cri-containerd-b733f3797234026720bb29887fd30289d41e101d81410da2e0f9aea930e9faaa.scope - libcontainer container b733f3797234026720bb29887fd30289d41e101d81410da2e0f9aea930e9faaa.
Sep 5 00:03:05.122519 containerd[1498]: time="2025-09-05T00:03:05.122484110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzdvw,Uid:9dcf2462-62aa-4e1d-b3d1-699927320ced,Namespace:kube-system,Attempt:0,} returns sandbox id \"b733f3797234026720bb29887fd30289d41e101d81410da2e0f9aea930e9faaa\""
Sep 5 00:03:05.127171 containerd[1498]: time="2025-09-05T00:03:05.127137375Z" level=info msg="CreateContainer within sandbox \"b733f3797234026720bb29887fd30289d41e101d81410da2e0f9aea930e9faaa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 00:03:05.162180 containerd[1498]: time="2025-09-05T00:03:05.162140780Z" level=info msg="Container 73ddf67de10a0c74e8061c949daf876353863d07198c5934dcac98e31bc32c5a: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:03:05.176530 containerd[1498]: time="2025-09-05T00:03:05.176464247Z" level=info msg="CreateContainer within sandbox \"b733f3797234026720bb29887fd30289d41e101d81410da2e0f9aea930e9faaa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73ddf67de10a0c74e8061c949daf876353863d07198c5934dcac98e31bc32c5a\""
Sep 5 00:03:05.177133 containerd[1498]: time="2025-09-05T00:03:05.177105954Z" level=info msg="StartContainer for \"73ddf67de10a0c74e8061c949daf876353863d07198c5934dcac98e31bc32c5a\""
Sep 5 00:03:05.178413 containerd[1498]: time="2025-09-05T00:03:05.178367848Z" level=info msg="connecting to shim 73ddf67de10a0c74e8061c949daf876353863d07198c5934dcac98e31bc32c5a" address="unix:///run/containerd/s/ac0d6da4feeff7a4d449603dd8279aaba4bbfd652b90d05a7f9f68fb85921040" protocol=ttrpc version=3
Sep 5 00:03:05.195667 systemd[1]: Started cri-containerd-73ddf67de10a0c74e8061c949daf876353863d07198c5934dcac98e31bc32c5a.scope - libcontainer container 73ddf67de10a0c74e8061c949daf876353863d07198c5934dcac98e31bc32c5a.
Sep 5 00:03:05.207841 containerd[1498]: time="2025-09-05T00:03:05.206502314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bkv89,Uid:67517df5-4e98-42f0-8f42-6609255e837a,Namespace:tigera-operator,Attempt:0,}"
Sep 5 00:03:05.227840 containerd[1498]: time="2025-09-05T00:03:05.227796999Z" level=info msg="StartContainer for \"73ddf67de10a0c74e8061c949daf876353863d07198c5934dcac98e31bc32c5a\" returns successfully"
Sep 5 00:03:05.228509 containerd[1498]: time="2025-09-05T00:03:05.228348147Z" level=info msg="connecting to shim 3a919526d7447dd66307f476443e59667009d9983b71d9c7f94f3909426f4a40" address="unix:///run/containerd/s/2e95771475cb5b4532d3933face7979d47a3b821681ff3c091882c19d77d3b09" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:03:05.251627 systemd[1]: Started cri-containerd-3a919526d7447dd66307f476443e59667009d9983b71d9c7f94f3909426f4a40.scope - libcontainer container 3a919526d7447dd66307f476443e59667009d9983b71d9c7f94f3909426f4a40.
Sep 5 00:03:05.287150 containerd[1498]: time="2025-09-05T00:03:05.287098347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bkv89,Uid:67517df5-4e98-42f0-8f42-6609255e837a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3a919526d7447dd66307f476443e59667009d9983b71d9c7f94f3909426f4a40\""
Sep 5 00:03:05.290485 containerd[1498]: time="2025-09-05T00:03:05.290157205Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 5 00:03:05.873099 kubelet[2644]: I0905 00:03:05.873035 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wzdvw" podStartSLOduration=1.87301978 podStartE2EDuration="1.87301978s" podCreationTimestamp="2025-09-05 00:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:03:05.872907662 +0000 UTC m=+9.131855247" watchObservedRunningTime="2025-09-05 00:03:05.87301978 +0000 UTC m=+9.131967325"
Sep 5 00:03:05.895332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730624625.mount: Deactivated successfully.
Sep 5 00:03:06.384769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494301488.mount: Deactivated successfully.
Sep 5 00:03:08.193693 containerd[1498]: time="2025-09-05T00:03:08.193644134Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:08.194553 containerd[1498]: time="2025-09-05T00:03:08.194455279Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 5 00:03:08.195243 containerd[1498]: time="2025-09-05T00:03:08.195208866Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:08.197405 containerd[1498]: time="2025-09-05T00:03:08.197359069Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:08.198035 containerd[1498]: time="2025-09-05T00:03:08.198002138Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.907780134s"
Sep 5 00:03:08.198092 containerd[1498]: time="2025-09-05T00:03:08.198035217Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 5 00:03:08.209468 containerd[1498]: time="2025-09-05T00:03:08.209182343Z" level=info msg="CreateContainer within sandbox \"3a919526d7447dd66307f476443e59667009d9983b71d9c7f94f3909426f4a40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 5 00:03:08.240480 containerd[1498]: time="2025-09-05T00:03:08.240238043Z" level=info msg="Container 40f66f98d5ede3da4c8596a20814ae1ff174d66d016d39954381fc009e36ef96: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:03:08.245019 containerd[1498]: time="2025-09-05T00:03:08.244987321Z" level=info msg="CreateContainer within sandbox \"3a919526d7447dd66307f476443e59667009d9983b71d9c7f94f3909426f4a40\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"40f66f98d5ede3da4c8596a20814ae1ff174d66d016d39954381fc009e36ef96\""
Sep 5 00:03:08.245950 containerd[1498]: time="2025-09-05T00:03:08.245790987Z" level=info msg="StartContainer for \"40f66f98d5ede3da4c8596a20814ae1ff174d66d016d39954381fc009e36ef96\""
Sep 5 00:03:08.247068 containerd[1498]: time="2025-09-05T00:03:08.247042325Z" level=info msg="connecting to shim 40f66f98d5ede3da4c8596a20814ae1ff174d66d016d39954381fc009e36ef96" address="unix:///run/containerd/s/2e95771475cb5b4532d3933face7979d47a3b821681ff3c091882c19d77d3b09" protocol=ttrpc version=3
Sep 5 00:03:08.267591 systemd[1]: Started cri-containerd-40f66f98d5ede3da4c8596a20814ae1ff174d66d016d39954381fc009e36ef96.scope - libcontainer container 40f66f98d5ede3da4c8596a20814ae1ff174d66d016d39954381fc009e36ef96.
Sep 5 00:03:08.313326 containerd[1498]: time="2025-09-05T00:03:08.313264093Z" level=info msg="StartContainer for \"40f66f98d5ede3da4c8596a20814ae1ff174d66d016d39954381fc009e36ef96\" returns successfully"
Sep 5 00:03:12.548939 update_engine[1486]: I20250905 00:03:12.548877 1486 update_attempter.cc:509] Updating boot flags...
Sep 5 00:03:13.573864 sudo[1706]: pam_unix(sudo:session): session closed for user root
Sep 5 00:03:13.577568 sshd[1705]: Connection closed by 10.0.0.1 port 33288
Sep 5 00:03:13.578924 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:13.585034 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:33288.service: Deactivated successfully.
Sep 5 00:03:13.589291 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 00:03:13.589647 systemd[1]: session-7.scope: Consumed 6.969s CPU time, 224.7M memory peak.
Sep 5 00:03:13.591041 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit.
Sep 5 00:03:13.592671 systemd-logind[1480]: Removed session 7.
Sep 5 00:03:18.267468 kubelet[2644]: I0905 00:03:18.267382 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-bkv89" podStartSLOduration=11.350441583 podStartE2EDuration="14.267364033s" podCreationTimestamp="2025-09-05 00:03:04 +0000 UTC" firstStartedPulling="2025-09-05 00:03:05.288578957 +0000 UTC m=+8.547526542" lastFinishedPulling="2025-09-05 00:03:08.205501407 +0000 UTC m=+11.464448992" observedRunningTime="2025-09-05 00:03:08.892699898 +0000 UTC m=+12.151647443" watchObservedRunningTime="2025-09-05 00:03:18.267364033 +0000 UTC m=+21.526311618"
Sep 5 00:03:18.292880 systemd[1]: Created slice kubepods-besteffort-pod68c995ae_d464_4655_a358_c336d0308e74.slice - libcontainer container kubepods-besteffort-pod68c995ae_d464_4655_a358_c336d0308e74.slice.
Sep 5 00:03:18.369318 kubelet[2644]: I0905 00:03:18.369271 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/68c995ae-d464-4655-a358-c336d0308e74-typha-certs\") pod \"calico-typha-68d58db98c-qgp82\" (UID: \"68c995ae-d464-4655-a358-c336d0308e74\") " pod="calico-system/calico-typha-68d58db98c-qgp82"
Sep 5 00:03:18.369605 kubelet[2644]: I0905 00:03:18.369517 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68c995ae-d464-4655-a358-c336d0308e74-tigera-ca-bundle\") pod \"calico-typha-68d58db98c-qgp82\" (UID: \"68c995ae-d464-4655-a358-c336d0308e74\") " pod="calico-system/calico-typha-68d58db98c-qgp82"
Sep 5 00:03:18.369605 kubelet[2644]: I0905 00:03:18.369563 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2q68\" (UniqueName: \"kubernetes.io/projected/68c995ae-d464-4655-a358-c336d0308e74-kube-api-access-p2q68\") pod \"calico-typha-68d58db98c-qgp82\" (UID: \"68c995ae-d464-4655-a358-c336d0308e74\") " pod="calico-system/calico-typha-68d58db98c-qgp82"
Sep 5 00:03:18.557804 systemd[1]: Created slice kubepods-besteffort-pod428dc868_24ec_437d_a276_140899d7192e.slice - libcontainer container kubepods-besteffort-pod428dc868_24ec_437d_a276_140899d7192e.slice.
Sep 5 00:03:18.570696 kubelet[2644]: I0905 00:03:18.570652 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/428dc868-24ec-437d-a276-140899d7192e-node-certs\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.570696 kubelet[2644]: I0905 00:03:18.570706 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npfcd\" (UniqueName: \"kubernetes.io/projected/428dc868-24ec-437d-a276-140899d7192e-kube-api-access-npfcd\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.570887 kubelet[2644]: I0905 00:03:18.570840 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-cni-log-dir\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.575868 kubelet[2644]: I0905 00:03:18.571508 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-policysync\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.575980 kubelet[2644]: I0905 00:03:18.575912 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-cni-bin-dir\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.575980 kubelet[2644]: I0905 00:03:18.575932 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-var-lib-calico\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.575980 kubelet[2644]: I0905 00:03:18.575954 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428dc868-24ec-437d-a276-140899d7192e-tigera-ca-bundle\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.575980 kubelet[2644]: I0905 00:03:18.575969 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-xtables-lock\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.576073 kubelet[2644]: I0905 00:03:18.575987 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-cni-net-dir\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.576073 kubelet[2644]: I0905 00:03:18.576003 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-flexvol-driver-host\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.576073 kubelet[2644]: I0905 00:03:18.576020 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-lib-modules\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.576073 kubelet[2644]: I0905 00:03:18.576035 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/428dc868-24ec-437d-a276-140899d7192e-var-run-calico\") pod \"calico-node-9zdpt\" (UID: \"428dc868-24ec-437d-a276-140899d7192e\") " pod="calico-system/calico-node-9zdpt"
Sep 5 00:03:18.597565 containerd[1498]: time="2025-09-05T00:03:18.597513644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68d58db98c-qgp82,Uid:68c995ae-d464-4655-a358-c336d0308e74,Namespace:calico-system,Attempt:0,}"
Sep 5 00:03:18.643467 containerd[1498]: time="2025-09-05T00:03:18.643389836Z" level=info msg="connecting to shim ae889a84e39283bea2f9e00969f13a0fe5f99210af5819f851ff8a7e2ef5eb99" address="unix:///run/containerd/s/149426915ed8351c0f1f0ec553be91e5acae75643d0e2cdf600f000ea9522c16" namespace=k8s.io protocol=ttrpc version=3
Sep 5 00:03:18.669655 systemd[1]: Started cri-containerd-ae889a84e39283bea2f9e00969f13a0fe5f99210af5819f851ff8a7e2ef5eb99.scope - libcontainer container ae889a84e39283bea2f9e00969f13a0fe5f99210af5819f851ff8a7e2ef5eb99.
Sep 5 00:03:18.678293 kubelet[2644]: E0905 00:03:18.678220 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.678293 kubelet[2644]: W0905 00:03:18.678245 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.682764 kubelet[2644]: E0905 00:03:18.682708 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.685345 kubelet[2644]: E0905 00:03:18.683010 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.685345 kubelet[2644]: W0905 00:03:18.683029 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.685345 kubelet[2644]: E0905 00:03:18.683048 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.685345 kubelet[2644]: E0905 00:03:18.683220 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.685345 kubelet[2644]: W0905 00:03:18.683229 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.685345 kubelet[2644]: E0905 00:03:18.683239 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.685345 kubelet[2644]: E0905 00:03:18.683418 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.685345 kubelet[2644]: W0905 00:03:18.683427 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.685345 kubelet[2644]: E0905 00:03:18.683448 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.685345 kubelet[2644]: E0905 00:03:18.683666 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.685655 kubelet[2644]: W0905 00:03:18.683676 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.685655 kubelet[2644]: E0905 00:03:18.683687 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.687535 kubelet[2644]: E0905 00:03:18.686587 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.687854 kubelet[2644]: W0905 00:03:18.687798 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.688379 kubelet[2644]: E0905 00:03:18.687830 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.700423 kubelet[2644]: E0905 00:03:18.700381 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.700423 kubelet[2644]: W0905 00:03:18.700407 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.700423 kubelet[2644]: E0905 00:03:18.700430 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 00:03:18.766419 containerd[1498]: time="2025-09-05T00:03:18.766368769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68d58db98c-qgp82,Uid:68c995ae-d464-4655-a358-c336d0308e74,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae889a84e39283bea2f9e00969f13a0fe5f99210af5819f851ff8a7e2ef5eb99\"" Sep 5 00:03:18.768284 containerd[1498]: time="2025-09-05T00:03:18.768247189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:03:18.843345 kubelet[2644]: E0905 00:03:18.842494 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fs25" podUID="6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580" Sep 5 00:03:18.863471 kubelet[2644]: E0905 00:03:18.863392 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.863471 kubelet[2644]: W0905 00:03:18.863466 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.863637 kubelet[2644]: E0905 00:03:18.863502 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.864233 kubelet[2644]: E0905 00:03:18.863666 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.864319 containerd[1498]: time="2025-09-05T00:03:18.864058650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9zdpt,Uid:428dc868-24ec-437d-a276-140899d7192e,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:18.866268 kubelet[2644]: W0905 00:03:18.863679 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.866268 kubelet[2644]: E0905 00:03:18.866271 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.866922 kubelet[2644]: E0905 00:03:18.866632 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.866922 kubelet[2644]: W0905 00:03:18.866650 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.866922 kubelet[2644]: E0905 00:03:18.866664 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.866922 kubelet[2644]: E0905 00:03:18.866881 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.866922 kubelet[2644]: W0905 00:03:18.866891 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.866922 kubelet[2644]: E0905 00:03:18.866899 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.867116 kubelet[2644]: E0905 00:03:18.867060 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.867116 kubelet[2644]: W0905 00:03:18.867083 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.867116 kubelet[2644]: E0905 00:03:18.867093 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.867986 kubelet[2644]: E0905 00:03:18.867267 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.867986 kubelet[2644]: W0905 00:03:18.867282 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.867986 kubelet[2644]: E0905 00:03:18.867293 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.867986 kubelet[2644]: E0905 00:03:18.867495 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.867986 kubelet[2644]: W0905 00:03:18.867505 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.867986 kubelet[2644]: E0905 00:03:18.867514 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.867986 kubelet[2644]: E0905 00:03:18.867672 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.867986 kubelet[2644]: W0905 00:03:18.867680 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.867986 kubelet[2644]: E0905 00:03:18.867695 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.867986 kubelet[2644]: E0905 00:03:18.867870 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.868229 kubelet[2644]: W0905 00:03:18.867881 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.868229 kubelet[2644]: E0905 00:03:18.867902 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.868229 kubelet[2644]: E0905 00:03:18.868055 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.868229 kubelet[2644]: W0905 00:03:18.868064 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.868229 kubelet[2644]: E0905 00:03:18.868073 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.868229 kubelet[2644]: E0905 00:03:18.868201 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.868229 kubelet[2644]: W0905 00:03:18.868209 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.868229 kubelet[2644]: E0905 00:03:18.868217 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.868484 kubelet[2644]: E0905 00:03:18.868468 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.868484 kubelet[2644]: W0905 00:03:18.868481 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.868537 kubelet[2644]: E0905 00:03:18.868491 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.868713 kubelet[2644]: E0905 00:03:18.868699 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.868735 kubelet[2644]: W0905 00:03:18.868712 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.868735 kubelet[2644]: E0905 00:03:18.868722 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.868889 kubelet[2644]: E0905 00:03:18.868876 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.868889 kubelet[2644]: W0905 00:03:18.868888 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.868944 kubelet[2644]: E0905 00:03:18.868896 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.869039 kubelet[2644]: E0905 00:03:18.869027 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.869039 kubelet[2644]: W0905 00:03:18.869037 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.869097 kubelet[2644]: E0905 00:03:18.869044 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.869338 kubelet[2644]: E0905 00:03:18.869319 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.869338 kubelet[2644]: W0905 00:03:18.869337 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.869402 kubelet[2644]: E0905 00:03:18.869347 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.869660 kubelet[2644]: E0905 00:03:18.869609 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.869700 kubelet[2644]: W0905 00:03:18.869659 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.869700 kubelet[2644]: E0905 00:03:18.869670 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.869865 kubelet[2644]: E0905 00:03:18.869850 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.869865 kubelet[2644]: W0905 00:03:18.869863 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.869924 kubelet[2644]: E0905 00:03:18.869873 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.870024 kubelet[2644]: E0905 00:03:18.870012 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.870024 kubelet[2644]: W0905 00:03:18.870023 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.870065 kubelet[2644]: E0905 00:03:18.870031 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.870178 kubelet[2644]: E0905 00:03:18.870165 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.870178 kubelet[2644]: W0905 00:03:18.870176 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.870282 kubelet[2644]: E0905 00:03:18.870183 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.877970 kubelet[2644]: E0905 00:03:18.877943 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.877970 kubelet[2644]: W0905 00:03:18.877965 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.878069 kubelet[2644]: E0905 00:03:18.877982 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.878532 kubelet[2644]: E0905 00:03:18.878512 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.878592 kubelet[2644]: W0905 00:03:18.878532 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.878592 kubelet[2644]: E0905 00:03:18.878548 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.878894 kubelet[2644]: I0905 00:03:18.878871 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580-socket-dir\") pod \"csi-node-driver-4fs25\" (UID: \"6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580\") " pod="calico-system/csi-node-driver-4fs25" Sep 5 00:03:18.878953 kubelet[2644]: E0905 00:03:18.878941 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.878979 kubelet[2644]: W0905 00:03:18.878954 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.878979 kubelet[2644]: E0905 00:03:18.878966 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.879502 kubelet[2644]: E0905 00:03:18.879398 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.879502 kubelet[2644]: W0905 00:03:18.879413 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.879502 kubelet[2644]: E0905 00:03:18.879468 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.879606 kubelet[2644]: I0905 00:03:18.879510 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qtvg\" (UniqueName: \"kubernetes.io/projected/6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580-kube-api-access-7qtvg\") pod \"csi-node-driver-4fs25\" (UID: \"6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580\") " pod="calico-system/csi-node-driver-4fs25" Sep 5 00:03:18.879794 kubelet[2644]: E0905 00:03:18.879779 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.879821 kubelet[2644]: W0905 00:03:18.879794 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.879821 kubelet[2644]: E0905 00:03:18.879805 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.879875 kubelet[2644]: I0905 00:03:18.879831 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580-kubelet-dir\") pod \"csi-node-driver-4fs25\" (UID: \"6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580\") " pod="calico-system/csi-node-driver-4fs25" Sep 5 00:03:18.880146 kubelet[2644]: E0905 00:03:18.880123 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.880146 kubelet[2644]: W0905 00:03:18.880141 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.880192 kubelet[2644]: E0905 00:03:18.880152 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.880192 kubelet[2644]: I0905 00:03:18.880176 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580-registration-dir\") pod \"csi-node-driver-4fs25\" (UID: \"6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580\") " pod="calico-system/csi-node-driver-4fs25" Sep 5 00:03:18.880522 kubelet[2644]: E0905 00:03:18.880505 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.880559 kubelet[2644]: W0905 00:03:18.880521 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.880559 kubelet[2644]: E0905 00:03:18.880532 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.880797 kubelet[2644]: E0905 00:03:18.880782 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.880826 kubelet[2644]: W0905 00:03:18.880797 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.880826 kubelet[2644]: E0905 00:03:18.880808 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.881020 kubelet[2644]: E0905 00:03:18.881005 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.881050 kubelet[2644]: W0905 00:03:18.881020 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.881050 kubelet[2644]: E0905 00:03:18.881030 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.881281 kubelet[2644]: E0905 00:03:18.881265 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.881305 kubelet[2644]: W0905 00:03:18.881280 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.881305 kubelet[2644]: E0905 00:03:18.881290 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.882008 kubelet[2644]: E0905 00:03:18.881988 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.882056 kubelet[2644]: W0905 00:03:18.882042 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.882087 kubelet[2644]: E0905 00:03:18.882058 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.882108 kubelet[2644]: I0905 00:03:18.882091 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580-varrun\") pod \"csi-node-driver-4fs25\" (UID: \"6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580\") " pod="calico-system/csi-node-driver-4fs25" Sep 5 00:03:18.882491 kubelet[2644]: E0905 00:03:18.882468 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.882491 kubelet[2644]: W0905 00:03:18.882487 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.882657 kubelet[2644]: E0905 00:03:18.882500 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.882738 kubelet[2644]: E0905 00:03:18.882723 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.882738 kubelet[2644]: W0905 00:03:18.882736 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.882779 kubelet[2644]: E0905 00:03:18.882745 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.882934 kubelet[2644]: E0905 00:03:18.882920 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.882934 kubelet[2644]: W0905 00:03:18.882932 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.882987 kubelet[2644]: E0905 00:03:18.882940 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:18.883100 kubelet[2644]: E0905 00:03:18.883085 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:18.883100 kubelet[2644]: W0905 00:03:18.883097 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:18.883164 kubelet[2644]: E0905 00:03:18.883105 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:18.892135 containerd[1498]: time="2025-09-05T00:03:18.892093792Z" level=info msg="connecting to shim 3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1" address="unix:///run/containerd/s/3cb5c4473c99da6e5dae380f843f9bb4e25acab4a5d637d10a1321fb67e707f3" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:18.916625 systemd[1]: Started cri-containerd-3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1.scope - libcontainer container 3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1. 
Sep 5 00:03:18.940074 containerd[1498]: time="2025-09-05T00:03:18.940031603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9zdpt,Uid:428dc868-24ec-437d-a276-140899d7192e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1\""
Sep 5 00:03:18.983668 kubelet[2644]: E0905 00:03:18.983629 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.983668 kubelet[2644]: W0905 00:03:18.983657 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.983668 kubelet[2644]: E0905 00:03:18.983677 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.984014 kubelet[2644]: E0905 00:03:18.983995 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.984014 kubelet[2644]: W0905 00:03:18.984013 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.984064 kubelet[2644]: E0905 00:03:18.984025 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.984280 kubelet[2644]: E0905 00:03:18.984262 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.984280 kubelet[2644]: W0905 00:03:18.984276 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.984347 kubelet[2644]: E0905 00:03:18.984288 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.984536 kubelet[2644]: E0905 00:03:18.984520 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.984572 kubelet[2644]: W0905 00:03:18.984536 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.984572 kubelet[2644]: E0905 00:03:18.984547 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.984789 kubelet[2644]: E0905 00:03:18.984769 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.984815 kubelet[2644]: W0905 00:03:18.984790 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.984815 kubelet[2644]: E0905 00:03:18.984805 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.984999 kubelet[2644]: E0905 00:03:18.984981 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.984999 kubelet[2644]: W0905 00:03:18.984997 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.985074 kubelet[2644]: E0905 00:03:18.985006 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.985160 kubelet[2644]: E0905 00:03:18.985146 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.985194 kubelet[2644]: W0905 00:03:18.985172 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.985194 kubelet[2644]: E0905 00:03:18.985181 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.985380 kubelet[2644]: E0905 00:03:18.985368 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.985380 kubelet[2644]: W0905 00:03:18.985379 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.985424 kubelet[2644]: E0905 00:03:18.985387 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.985567 kubelet[2644]: E0905 00:03:18.985553 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.985567 kubelet[2644]: W0905 00:03:18.985565 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.985626 kubelet[2644]: E0905 00:03:18.985573 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.986641 kubelet[2644]: E0905 00:03:18.986606 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.986641 kubelet[2644]: W0905 00:03:18.986626 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.986641 kubelet[2644]: E0905 00:03:18.986638 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.986970 kubelet[2644]: E0905 00:03:18.986944 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.986970 kubelet[2644]: W0905 00:03:18.986960 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.986970 kubelet[2644]: E0905 00:03:18.986972 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.987457 kubelet[2644]: E0905 00:03:18.987327 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.987457 kubelet[2644]: W0905 00:03:18.987344 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.987457 kubelet[2644]: E0905 00:03:18.987356 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.987625 kubelet[2644]: E0905 00:03:18.987554 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.987625 kubelet[2644]: W0905 00:03:18.987566 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.987625 kubelet[2644]: E0905 00:03:18.987575 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.987855 kubelet[2644]: E0905 00:03:18.987718 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.987855 kubelet[2644]: W0905 00:03:18.987731 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.987855 kubelet[2644]: E0905 00:03:18.987739 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.988506 kubelet[2644]: E0905 00:03:18.988465 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.988570 kubelet[2644]: W0905 00:03:18.988515 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.988570 kubelet[2644]: E0905 00:03:18.988529 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.989243 kubelet[2644]: E0905 00:03:18.989181 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.989243 kubelet[2644]: W0905 00:03:18.989200 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.989243 kubelet[2644]: E0905 00:03:18.989212 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.989559 kubelet[2644]: E0905 00:03:18.989422 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.989559 kubelet[2644]: W0905 00:03:18.989434 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.989559 kubelet[2644]: E0905 00:03:18.989481 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.992543 kubelet[2644]: E0905 00:03:18.992521 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.992543 kubelet[2644]: W0905 00:03:18.992540 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.992640 kubelet[2644]: E0905 00:03:18.992554 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.992782 kubelet[2644]: E0905 00:03:18.992769 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.992782 kubelet[2644]: W0905 00:03:18.992781 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.992841 kubelet[2644]: E0905 00:03:18.992791 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.992952 kubelet[2644]: E0905 00:03:18.992930 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.992952 kubelet[2644]: W0905 00:03:18.992942 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.992952 kubelet[2644]: E0905 00:03:18.992951 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.993414 kubelet[2644]: E0905 00:03:18.993397 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.993414 kubelet[2644]: W0905 00:03:18.993412 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.993529 kubelet[2644]: E0905 00:03:18.993424 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.993651 kubelet[2644]: E0905 00:03:18.993635 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.993651 kubelet[2644]: W0905 00:03:18.993648 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.993696 kubelet[2644]: E0905 00:03:18.993657 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.994519 kubelet[2644]: E0905 00:03:18.994496 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.994519 kubelet[2644]: W0905 00:03:18.994515 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.994602 kubelet[2644]: E0905 00:03:18.994529 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.994812 kubelet[2644]: E0905 00:03:18.994787 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.994812 kubelet[2644]: W0905 00:03:18.994808 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.994862 kubelet[2644]: E0905 00:03:18.994821 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:18.995192 kubelet[2644]: E0905 00:03:18.995175 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:18.995192 kubelet[2644]: W0905 00:03:18.995189 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:18.995270 kubelet[2644]: E0905 00:03:18.995200 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:19.000352 kubelet[2644]: E0905 00:03:19.000314 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:19.000352 kubelet[2644]: W0905 00:03:19.000334 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:19.000352 kubelet[2644]: E0905 00:03:19.000351 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:19.795398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867292935.mount: Deactivated successfully.
Sep 5 00:03:20.400349 containerd[1498]: time="2025-09-05T00:03:20.400295946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:20.400778 containerd[1498]: time="2025-09-05T00:03:20.400688582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 5 00:03:20.401536 containerd[1498]: time="2025-09-05T00:03:20.401494294Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:20.403368 containerd[1498]: time="2025-09-05T00:03:20.403328756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:20.404088 containerd[1498]: time="2025-09-05T00:03:20.404052989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.635762561s"
Sep 5 00:03:20.404125 containerd[1498]: time="2025-09-05T00:03:20.404090109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 5 00:03:20.405043 containerd[1498]: time="2025-09-05T00:03:20.405011860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 5 00:03:20.416954 containerd[1498]: time="2025-09-05T00:03:20.416915104Z" level=info msg="CreateContainer within sandbox \"ae889a84e39283bea2f9e00969f13a0fe5f99210af5819f851ff8a7e2ef5eb99\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 5 00:03:20.461764 containerd[1498]: time="2025-09-05T00:03:20.461494270Z" level=info msg="Container dfbfb67728dd88d2576655bec1197f5e25d932cd8f969da907b7fad5d3f02b4b: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:03:20.472468 containerd[1498]: time="2025-09-05T00:03:20.472399444Z" level=info msg="CreateContainer within sandbox \"ae889a84e39283bea2f9e00969f13a0fe5f99210af5819f851ff8a7e2ef5eb99\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dfbfb67728dd88d2576655bec1197f5e25d932cd8f969da907b7fad5d3f02b4b\""
Sep 5 00:03:20.474850 containerd[1498]: time="2025-09-05T00:03:20.474807461Z" level=info msg="StartContainer for \"dfbfb67728dd88d2576655bec1197f5e25d932cd8f969da907b7fad5d3f02b4b\""
Sep 5 00:03:20.476851 containerd[1498]: time="2025-09-05T00:03:20.476810362Z" level=info msg="connecting to shim dfbfb67728dd88d2576655bec1197f5e25d932cd8f969da907b7fad5d3f02b4b" address="unix:///run/containerd/s/149426915ed8351c0f1f0ec553be91e5acae75643d0e2cdf600f000ea9522c16" protocol=ttrpc version=3
Sep 5 00:03:20.498844 systemd[1]: Started cri-containerd-dfbfb67728dd88d2576655bec1197f5e25d932cd8f969da907b7fad5d3f02b4b.scope - libcontainer container dfbfb67728dd88d2576655bec1197f5e25d932cd8f969da907b7fad5d3f02b4b.
Sep 5 00:03:20.612710 containerd[1498]: time="2025-09-05T00:03:20.612663360Z" level=info msg="StartContainer for \"dfbfb67728dd88d2576655bec1197f5e25d932cd8f969da907b7fad5d3f02b4b\" returns successfully"
Sep 5 00:03:20.830459 kubelet[2644]: E0905 00:03:20.830390 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fs25" podUID="6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580"
Sep 5 00:03:20.928459 kubelet[2644]: I0905 00:03:20.928381 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68d58db98c-qgp82" podStartSLOduration=1.291521858 podStartE2EDuration="2.928348369s" podCreationTimestamp="2025-09-05 00:03:18 +0000 UTC" firstStartedPulling="2025-09-05 00:03:18.767985871 +0000 UTC m=+22.026933416" lastFinishedPulling="2025-09-05 00:03:20.404812342 +0000 UTC m=+23.663759927" observedRunningTime="2025-09-05 00:03:20.928320609 +0000 UTC m=+24.187268194" watchObservedRunningTime="2025-09-05 00:03:20.928348369 +0000 UTC m=+24.187295954"
Sep 5 00:03:20.982841 kubelet[2644]: E0905 00:03:20.982810 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.982841 kubelet[2644]: W0905 00:03:20.982835 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.982841 kubelet[2644]: E0905 00:03:20.982856 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.983056 kubelet[2644]: E0905 00:03:20.983036 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.983056 kubelet[2644]: W0905 00:03:20.983048 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.983111 kubelet[2644]: E0905 00:03:20.983057 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.983195 kubelet[2644]: E0905 00:03:20.983182 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.983195 kubelet[2644]: W0905 00:03:20.983192 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.983267 kubelet[2644]: E0905 00:03:20.983200 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.983351 kubelet[2644]: E0905 00:03:20.983339 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.983351 kubelet[2644]: W0905 00:03:20.983350 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.983408 kubelet[2644]: E0905 00:03:20.983358 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.983527 kubelet[2644]: E0905 00:03:20.983513 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.983527 kubelet[2644]: W0905 00:03:20.983524 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.983588 kubelet[2644]: E0905 00:03:20.983533 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.983659 kubelet[2644]: E0905 00:03:20.983646 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.983659 kubelet[2644]: W0905 00:03:20.983656 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.983725 kubelet[2644]: E0905 00:03:20.983663 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.983785 kubelet[2644]: E0905 00:03:20.983775 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.983785 kubelet[2644]: W0905 00:03:20.983784 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.983838 kubelet[2644]: E0905 00:03:20.983790 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.983918 kubelet[2644]: E0905 00:03:20.983906 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.983918 kubelet[2644]: W0905 00:03:20.983915 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.983983 kubelet[2644]: E0905 00:03:20.983922 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.984061 kubelet[2644]: E0905 00:03:20.984049 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.984061 kubelet[2644]: W0905 00:03:20.984060 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.984117 kubelet[2644]: E0905 00:03:20.984076 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.984202 kubelet[2644]: E0905 00:03:20.984191 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.984202 kubelet[2644]: W0905 00:03:20.984201 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.984272 kubelet[2644]: E0905 00:03:20.984208 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.984338 kubelet[2644]: E0905 00:03:20.984327 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.984338 kubelet[2644]: W0905 00:03:20.984337 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.984395 kubelet[2644]: E0905 00:03:20.984344 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.984484 kubelet[2644]: E0905 00:03:20.984473 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.984484 kubelet[2644]: W0905 00:03:20.984483 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.984548 kubelet[2644]: E0905 00:03:20.984491 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.984624 kubelet[2644]: E0905 00:03:20.984613 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.984624 kubelet[2644]: W0905 00:03:20.984623 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.984684 kubelet[2644]: E0905 00:03:20.984630 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.984763 kubelet[2644]: E0905 00:03:20.984752 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.984763 kubelet[2644]: W0905 00:03:20.984761 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.984812 kubelet[2644]: E0905 00:03:20.984768 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:20.984889 kubelet[2644]: E0905 00:03:20.984878 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:20.984889 kubelet[2644]: W0905 00:03:20.984887 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:20.984952 kubelet[2644]: E0905 00:03:20.984894 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:21.000870 kubelet[2644]: E0905 00:03:21.000731 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:21.000870 kubelet[2644]: W0905 00:03:21.000749 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:21.000870 kubelet[2644]: E0905 00:03:21.000765 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:21.001102 kubelet[2644]: E0905 00:03:21.001085 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:21.001203 kubelet[2644]: W0905 00:03:21.001145 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:21.001203 kubelet[2644]: E0905 00:03:21.001162 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:21.001415 kubelet[2644]: E0905 00:03:21.001399 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:21.001464 kubelet[2644]: W0905 00:03:21.001415 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:21.001464 kubelet[2644]: E0905 00:03:21.001429 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:21.001607 kubelet[2644]: E0905 00:03:21.001596 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:21.001607 kubelet[2644]: W0905 00:03:21.001607 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:21.001660 kubelet[2644]: E0905 00:03:21.001615 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:21.001768 kubelet[2644]: E0905 00:03:21.001756 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:21.001768 kubelet[2644]: W0905 00:03:21.001767 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:21.001826 kubelet[2644]: E0905 00:03:21.001774 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:21.001945 kubelet[2644]: E0905 00:03:21.001934 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:21.001945 kubelet[2644]: W0905 00:03:21.001944 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:21.002002 kubelet[2644]: E0905 00:03:21.001952 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:21.002252 kubelet[2644]: E0905 00:03:21.002237 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:21.002252 kubelet[2644]: W0905 00:03:21.002251 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:21.002309 kubelet[2644]: E0905 00:03:21.002260 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:03:21.002425 kubelet[2644]: E0905 00:03:21.002413 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:03:21.002425 kubelet[2644]: W0905 00:03:21.002424 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:03:21.002488 kubelet[2644]: E0905 00:03:21.002432 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:21.002654 kubelet[2644]: E0905 00:03:21.002639 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.002683 kubelet[2644]: W0905 00:03:21.002655 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.002683 kubelet[2644]: E0905 00:03:21.002666 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:21.002815 kubelet[2644]: E0905 00:03:21.002803 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.002815 kubelet[2644]: W0905 00:03:21.002815 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.002868 kubelet[2644]: E0905 00:03:21.002825 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:21.002956 kubelet[2644]: E0905 00:03:21.002943 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.002956 kubelet[2644]: W0905 00:03:21.002955 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.003020 kubelet[2644]: E0905 00:03:21.002962 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:21.003122 kubelet[2644]: E0905 00:03:21.003110 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.003122 kubelet[2644]: W0905 00:03:21.003121 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.003184 kubelet[2644]: E0905 00:03:21.003129 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:21.003428 kubelet[2644]: E0905 00:03:21.003411 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.003520 kubelet[2644]: W0905 00:03:21.003506 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.003574 kubelet[2644]: E0905 00:03:21.003564 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:21.003814 kubelet[2644]: E0905 00:03:21.003799 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.003879 kubelet[2644]: W0905 00:03:21.003868 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.003936 kubelet[2644]: E0905 00:03:21.003925 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:21.004150 kubelet[2644]: E0905 00:03:21.004136 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.004356 kubelet[2644]: W0905 00:03:21.004210 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.004356 kubelet[2644]: E0905 00:03:21.004238 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:21.004509 kubelet[2644]: E0905 00:03:21.004494 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.004896 kubelet[2644]: W0905 00:03:21.004876 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.005510 kubelet[2644]: E0905 00:03:21.005484 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:21.006784 kubelet[2644]: E0905 00:03:21.005936 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.006784 kubelet[2644]: W0905 00:03:21.005953 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.006784 kubelet[2644]: E0905 00:03:21.005966 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:03:21.006992 kubelet[2644]: E0905 00:03:21.006975 2644 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:03:21.007082 kubelet[2644]: W0905 00:03:21.007067 2644 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:03:21.007143 kubelet[2644]: E0905 00:03:21.007132 2644 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:03:21.602428 containerd[1498]: time="2025-09-05T00:03:21.602365699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:21.602978 containerd[1498]: time="2025-09-05T00:03:21.602933414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 00:03:21.603803 containerd[1498]: time="2025-09-05T00:03:21.603766246Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:21.606043 containerd[1498]: time="2025-09-05T00:03:21.605992105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:21.606703 containerd[1498]: time="2025-09-05T00:03:21.606661939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.201620359s" Sep 5 00:03:21.606703 containerd[1498]: time="2025-09-05T00:03:21.606698938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 00:03:21.611066 containerd[1498]: time="2025-09-05T00:03:21.611037298Z" level=info msg="CreateContainer within sandbox \"3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:03:21.619353 containerd[1498]: time="2025-09-05T00:03:21.619320421Z" level=info msg="Container caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:21.626816 containerd[1498]: time="2025-09-05T00:03:21.626717112Z" level=info msg="CreateContainer within sandbox \"3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3\"" Sep 5 00:03:21.627152 containerd[1498]: time="2025-09-05T00:03:21.627083229Z" level=info msg="StartContainer for \"caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3\"" Sep 5 00:03:21.628938 containerd[1498]: time="2025-09-05T00:03:21.628833372Z" level=info msg="connecting to shim caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3" address="unix:///run/containerd/s/3cb5c4473c99da6e5dae380f843f9bb4e25acab4a5d637d10a1321fb67e707f3" protocol=ttrpc version=3 Sep 5 00:03:21.659596 systemd[1]: Started cri-containerd-caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3.scope - libcontainer container caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3. Sep 5 00:03:21.690540 containerd[1498]: time="2025-09-05T00:03:21.690502798Z" level=info msg="StartContainer for \"caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3\" returns successfully" Sep 5 00:03:21.700600 systemd[1]: cri-containerd-caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3.scope: Deactivated successfully. 
Sep 5 00:03:21.730209 containerd[1498]: time="2025-09-05T00:03:21.730135308Z" level=info msg="received exit event container_id:\"caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3\" id:\"caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3\" pid:3351 exited_at:{seconds:1757030601 nanos:716898352}" Sep 5 00:03:21.736471 containerd[1498]: time="2025-09-05T00:03:21.735568018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3\" id:\"caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3\" pid:3351 exited_at:{seconds:1757030601 nanos:716898352}" Sep 5 00:03:21.769509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caa80e8f90f77bf5b0b9eb8bce7dc005657280b4013fced5b0efab046b27c6e3-rootfs.mount: Deactivated successfully. Sep 5 00:03:21.921335 kubelet[2644]: I0905 00:03:21.921236 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:03:21.923625 containerd[1498]: time="2025-09-05T00:03:21.923569826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:03:22.830463 kubelet[2644]: E0905 00:03:22.830398 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fs25" podUID="6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580" Sep 5 00:03:24.214659 containerd[1498]: time="2025-09-05T00:03:24.214611236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:24.215599 containerd[1498]: time="2025-09-05T00:03:24.215435149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 00:03:24.216243 containerd[1498]: 
time="2025-09-05T00:03:24.216213983Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:24.218914 containerd[1498]: time="2025-09-05T00:03:24.218884521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:24.219457 containerd[1498]: time="2025-09-05T00:03:24.219408957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.295790731s" Sep 5 00:03:24.219457 containerd[1498]: time="2025-09-05T00:03:24.219449076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 00:03:24.223096 containerd[1498]: time="2025-09-05T00:03:24.223064927Z" level=info msg="CreateContainer within sandbox \"3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:03:24.232471 containerd[1498]: time="2025-09-05T00:03:24.231806495Z" level=info msg="Container de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:24.239638 containerd[1498]: time="2025-09-05T00:03:24.239601070Z" level=info msg="CreateContainer within sandbox \"3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe\"" Sep 5 
00:03:24.240151 containerd[1498]: time="2025-09-05T00:03:24.240128546Z" level=info msg="StartContainer for \"de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe\"" Sep 5 00:03:24.243623 containerd[1498]: time="2025-09-05T00:03:24.243592118Z" level=info msg="connecting to shim de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe" address="unix:///run/containerd/s/3cb5c4473c99da6e5dae380f843f9bb4e25acab4a5d637d10a1321fb67e707f3" protocol=ttrpc version=3 Sep 5 00:03:24.307584 systemd[1]: Started cri-containerd-de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe.scope - libcontainer container de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe. Sep 5 00:03:24.345939 containerd[1498]: time="2025-09-05T00:03:24.345894275Z" level=info msg="StartContainer for \"de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe\" returns successfully" Sep 5 00:03:24.830257 kubelet[2644]: E0905 00:03:24.830194 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fs25" podUID="6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580" Sep 5 00:03:24.862295 systemd[1]: cri-containerd-de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe.scope: Deactivated successfully. Sep 5 00:03:24.862588 systemd[1]: cri-containerd-de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe.scope: Consumed 457ms CPU time, 175.9M memory peak, 2.7M read from disk, 165.8M written to disk. 
Sep 5 00:03:24.875706 containerd[1498]: time="2025-09-05T00:03:24.875618112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe\" id:\"de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe\" pid:3413 exited_at:{seconds:1757030604 nanos:875038037}" Sep 5 00:03:24.877952 containerd[1498]: time="2025-09-05T00:03:24.877781054Z" level=info msg="received exit event container_id:\"de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe\" id:\"de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe\" pid:3413 exited_at:{seconds:1757030604 nanos:875038037}" Sep 5 00:03:24.895297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de084812b467c472dde595815e872d9e7e03a4b269f147c9acbd47e5631513fe-rootfs.mount: Deactivated successfully. Sep 5 00:03:24.907149 kubelet[2644]: I0905 00:03:24.907122 2644 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:03:25.020057 systemd[1]: Created slice kubepods-burstable-pod8f833ab8_22e9_482c_a3eb_e267ae46b5b8.slice - libcontainer container kubepods-burstable-pod8f833ab8_22e9_482c_a3eb_e267ae46b5b8.slice. 
Sep 5 00:03:25.029809 kubelet[2644]: I0905 00:03:25.029771 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfnn\" (UniqueName: \"kubernetes.io/projected/1b310bae-11fe-4356-979f-929d2920e4fe-kube-api-access-2zfnn\") pod \"coredns-674b8bbfcf-fxg5r\" (UID: \"1b310bae-11fe-4356-979f-929d2920e4fe\") " pod="kube-system/coredns-674b8bbfcf-fxg5r" Sep 5 00:03:25.029925 kubelet[2644]: I0905 00:03:25.029826 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/651e4e42-bdd6-4023-8f33-2961e3821ab5-calico-apiserver-certs\") pod \"calico-apiserver-5655588b8f-x8lr2\" (UID: \"651e4e42-bdd6-4023-8f33-2961e3821ab5\") " pod="calico-apiserver/calico-apiserver-5655588b8f-x8lr2" Sep 5 00:03:25.029925 kubelet[2644]: I0905 00:03:25.029849 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thb5f\" (UniqueName: \"kubernetes.io/projected/c5861876-7413-4f2b-beed-2f51ab539333-kube-api-access-thb5f\") pod \"calico-kube-controllers-6488ffd8d-z7pjc\" (UID: \"c5861876-7413-4f2b-beed-2f51ab539333\") " pod="calico-system/calico-kube-controllers-6488ffd8d-z7pjc" Sep 5 00:03:25.029925 kubelet[2644]: I0905 00:03:25.029877 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5861876-7413-4f2b-beed-2f51ab539333-tigera-ca-bundle\") pod \"calico-kube-controllers-6488ffd8d-z7pjc\" (UID: \"c5861876-7413-4f2b-beed-2f51ab539333\") " pod="calico-system/calico-kube-controllers-6488ffd8d-z7pjc" Sep 5 00:03:25.029925 kubelet[2644]: I0905 00:03:25.029895 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b310bae-11fe-4356-979f-929d2920e4fe-config-volume\") pod 
\"coredns-674b8bbfcf-fxg5r\" (UID: \"1b310bae-11fe-4356-979f-929d2920e4fe\") " pod="kube-system/coredns-674b8bbfcf-fxg5r" Sep 5 00:03:25.030024 kubelet[2644]: I0905 00:03:25.029927 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f833ab8-22e9-482c-a3eb-e267ae46b5b8-config-volume\") pod \"coredns-674b8bbfcf-cz92t\" (UID: \"8f833ab8-22e9-482c-a3eb-e267ae46b5b8\") " pod="kube-system/coredns-674b8bbfcf-cz92t" Sep 5 00:03:25.030024 kubelet[2644]: I0905 00:03:25.029944 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m55m\" (UniqueName: \"kubernetes.io/projected/651e4e42-bdd6-4023-8f33-2961e3821ab5-kube-api-access-5m55m\") pod \"calico-apiserver-5655588b8f-x8lr2\" (UID: \"651e4e42-bdd6-4023-8f33-2961e3821ab5\") " pod="calico-apiserver/calico-apiserver-5655588b8f-x8lr2" Sep 5 00:03:25.030024 kubelet[2644]: I0905 00:03:25.029971 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qg68\" (UniqueName: \"kubernetes.io/projected/8f833ab8-22e9-482c-a3eb-e267ae46b5b8-kube-api-access-4qg68\") pod \"coredns-674b8bbfcf-cz92t\" (UID: \"8f833ab8-22e9-482c-a3eb-e267ae46b5b8\") " pod="kube-system/coredns-674b8bbfcf-cz92t" Sep 5 00:03:25.030593 systemd[1]: Created slice kubepods-besteffort-podc5861876_7413_4f2b_beed_2f51ab539333.slice - libcontainer container kubepods-besteffort-podc5861876_7413_4f2b_beed_2f51ab539333.slice. Sep 5 00:03:25.039023 systemd[1]: Created slice kubepods-burstable-pod1b310bae_11fe_4356_979f_929d2920e4fe.slice - libcontainer container kubepods-burstable-pod1b310bae_11fe_4356_979f_929d2920e4fe.slice. Sep 5 00:03:25.045707 systemd[1]: Created slice kubepods-besteffort-pod651e4e42_bdd6_4023_8f33_2961e3821ab5.slice - libcontainer container kubepods-besteffort-pod651e4e42_bdd6_4023_8f33_2961e3821ab5.slice. 
Sep 5 00:03:25.050118 systemd[1]: Created slice kubepods-besteffort-pod549a4da7_aa1c_459a_8093_1fbfc4f5bb3d.slice - libcontainer container kubepods-besteffort-pod549a4da7_aa1c_459a_8093_1fbfc4f5bb3d.slice. Sep 5 00:03:25.059118 systemd[1]: Created slice kubepods-besteffort-pod9fdc45e1_c13b_4a58_b814_a8f24e60f8eb.slice - libcontainer container kubepods-besteffort-pod9fdc45e1_c13b_4a58_b814_a8f24e60f8eb.slice. Sep 5 00:03:25.063727 systemd[1]: Created slice kubepods-besteffort-pod8ed13891_25f7_4c98_b611_d4d3148e80ea.slice - libcontainer container kubepods-besteffort-pod8ed13891_25f7_4c98_b611_d4d3148e80ea.slice. Sep 5 00:03:25.131282 kubelet[2644]: I0905 00:03:25.131140 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5x47\" (UniqueName: \"kubernetes.io/projected/8ed13891-25f7-4c98-b611-d4d3148e80ea-kube-api-access-w5x47\") pod \"calico-apiserver-5655588b8f-wpbgb\" (UID: \"8ed13891-25f7-4c98-b611-d4d3148e80ea\") " pod="calico-apiserver/calico-apiserver-5655588b8f-wpbgb" Sep 5 00:03:25.132530 kubelet[2644]: I0905 00:03:25.131319 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fdc45e1-c13b-4a58-b814-a8f24e60f8eb-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-lqqg4\" (UID: \"9fdc45e1-c13b-4a58-b814-a8f24e60f8eb\") " pod="calico-system/goldmane-54d579b49d-lqqg4" Sep 5 00:03:25.132530 kubelet[2644]: I0905 00:03:25.131371 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlj68\" (UniqueName: \"kubernetes.io/projected/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-kube-api-access-qlj68\") pod \"whisker-6b998b48f5-ddbns\" (UID: \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\") " pod="calico-system/whisker-6b998b48f5-ddbns" Sep 5 00:03:25.132530 kubelet[2644]: I0905 00:03:25.131452 2644 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdc45e1-c13b-4a58-b814-a8f24e60f8eb-config\") pod \"goldmane-54d579b49d-lqqg4\" (UID: \"9fdc45e1-c13b-4a58-b814-a8f24e60f8eb\") " pod="calico-system/goldmane-54d579b49d-lqqg4" Sep 5 00:03:25.132530 kubelet[2644]: I0905 00:03:25.131480 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-whisker-backend-key-pair\") pod \"whisker-6b998b48f5-ddbns\" (UID: \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\") " pod="calico-system/whisker-6b998b48f5-ddbns" Sep 5 00:03:25.133664 kubelet[2644]: I0905 00:03:25.132678 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-whisker-ca-bundle\") pod \"whisker-6b998b48f5-ddbns\" (UID: \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\") " pod="calico-system/whisker-6b998b48f5-ddbns" Sep 5 00:03:25.133664 kubelet[2644]: I0905 00:03:25.132731 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ed13891-25f7-4c98-b611-d4d3148e80ea-calico-apiserver-certs\") pod \"calico-apiserver-5655588b8f-wpbgb\" (UID: \"8ed13891-25f7-4c98-b611-d4d3148e80ea\") " pod="calico-apiserver/calico-apiserver-5655588b8f-wpbgb" Sep 5 00:03:25.133664 kubelet[2644]: I0905 00:03:25.132795 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9fdc45e1-c13b-4a58-b814-a8f24e60f8eb-goldmane-key-pair\") pod \"goldmane-54d579b49d-lqqg4\" (UID: \"9fdc45e1-c13b-4a58-b814-a8f24e60f8eb\") " pod="calico-system/goldmane-54d579b49d-lqqg4" Sep 5 00:03:25.133664 kubelet[2644]: 
I0905 00:03:25.132822 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p97z\" (UniqueName: \"kubernetes.io/projected/9fdc45e1-c13b-4a58-b814-a8f24e60f8eb-kube-api-access-2p97z\") pod \"goldmane-54d579b49d-lqqg4\" (UID: \"9fdc45e1-c13b-4a58-b814-a8f24e60f8eb\") " pod="calico-system/goldmane-54d579b49d-lqqg4" Sep 5 00:03:25.325970 containerd[1498]: time="2025-09-05T00:03:25.325936625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cz92t,Uid:8f833ab8-22e9-482c-a3eb-e267ae46b5b8,Namespace:kube-system,Attempt:0,}" Sep 5 00:03:25.334609 containerd[1498]: time="2025-09-05T00:03:25.334568717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6488ffd8d-z7pjc,Uid:c5861876-7413-4f2b-beed-2f51ab539333,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:25.343538 containerd[1498]: time="2025-09-05T00:03:25.343496766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxg5r,Uid:1b310bae-11fe-4356-979f-929d2920e4fe,Namespace:kube-system,Attempt:0,}" Sep 5 00:03:25.348679 containerd[1498]: time="2025-09-05T00:03:25.348639085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655588b8f-x8lr2,Uid:651e4e42-bdd6-4023-8f33-2961e3821ab5,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:03:25.354071 containerd[1498]: time="2025-09-05T00:03:25.354022923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b998b48f5-ddbns,Uid:549a4da7-aa1c-459a-8093-1fbfc4f5bb3d,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:25.363342 containerd[1498]: time="2025-09-05T00:03:25.363303449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-lqqg4,Uid:9fdc45e1-c13b-4a58-b814-a8f24e60f8eb,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:25.367794 containerd[1498]: time="2025-09-05T00:03:25.367760854Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5655588b8f-wpbgb,Uid:8ed13891-25f7-4c98-b611-d4d3148e80ea,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:03:25.453648 containerd[1498]: time="2025-09-05T00:03:25.453593334Z" level=error msg="Failed to destroy network for sandbox \"b04d8c6756b470738843632ca71fb2c39d20a0d62914f110ba8793bf6751c8bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.455525 containerd[1498]: time="2025-09-05T00:03:25.455423000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6488ffd8d-z7pjc,Uid:c5861876-7413-4f2b-beed-2f51ab539333,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04d8c6756b470738843632ca71fb2c39d20a0d62914f110ba8793bf6751c8bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.459203 kubelet[2644]: E0905 00:03:25.459133 2644 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04d8c6756b470738843632ca71fb2c39d20a0d62914f110ba8793bf6751c8bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.459312 kubelet[2644]: E0905 00:03:25.459237 2644 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04d8c6756b470738843632ca71fb2c39d20a0d62914f110ba8793bf6751c8bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-6488ffd8d-z7pjc" Sep 5 00:03:25.459312 kubelet[2644]: E0905 00:03:25.459259 2644 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04d8c6756b470738843632ca71fb2c39d20a0d62914f110ba8793bf6751c8bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6488ffd8d-z7pjc" Sep 5 00:03:25.459703 kubelet[2644]: E0905 00:03:25.459663 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6488ffd8d-z7pjc_calico-system(c5861876-7413-4f2b-beed-2f51ab539333)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6488ffd8d-z7pjc_calico-system(c5861876-7413-4f2b-beed-2f51ab539333)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b04d8c6756b470738843632ca71fb2c39d20a0d62914f110ba8793bf6751c8bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6488ffd8d-z7pjc" podUID="c5861876-7413-4f2b-beed-2f51ab539333" Sep 5 00:03:25.461621 containerd[1498]: time="2025-09-05T00:03:25.461575431Z" level=error msg="Failed to destroy network for sandbox \"2bdf27aebe080c43af3f15a381b5681d349e4d88449cec921cb864ca23fb4113\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.466570 containerd[1498]: time="2025-09-05T00:03:25.466522632Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6b998b48f5-ddbns,Uid:549a4da7-aa1c-459a-8093-1fbfc4f5bb3d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdf27aebe080c43af3f15a381b5681d349e4d88449cec921cb864ca23fb4113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.467287 kubelet[2644]: E0905 00:03:25.466890 2644 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdf27aebe080c43af3f15a381b5681d349e4d88449cec921cb864ca23fb4113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.467287 kubelet[2644]: E0905 00:03:25.466956 2644 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdf27aebe080c43af3f15a381b5681d349e4d88449cec921cb864ca23fb4113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b998b48f5-ddbns" Sep 5 00:03:25.467287 kubelet[2644]: E0905 00:03:25.466976 2644 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdf27aebe080c43af3f15a381b5681d349e4d88449cec921cb864ca23fb4113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b998b48f5-ddbns" Sep 5 00:03:25.467579 kubelet[2644]: E0905 00:03:25.467028 2644 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b998b48f5-ddbns_calico-system(549a4da7-aa1c-459a-8093-1fbfc4f5bb3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b998b48f5-ddbns_calico-system(549a4da7-aa1c-459a-8093-1fbfc4f5bb3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bdf27aebe080c43af3f15a381b5681d349e4d88449cec921cb864ca23fb4113\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b998b48f5-ddbns" podUID="549a4da7-aa1c-459a-8093-1fbfc4f5bb3d" Sep 5 00:03:25.469820 containerd[1498]: time="2025-09-05T00:03:25.469779966Z" level=error msg="Failed to destroy network for sandbox \"0fe49435306c778e0c0811d3299ffc7d6ae1453590744e2872b931c5518de141\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.471831 containerd[1498]: time="2025-09-05T00:03:25.471793150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655588b8f-wpbgb,Uid:8ed13891-25f7-4c98-b611-d4d3148e80ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe49435306c778e0c0811d3299ffc7d6ae1453590744e2872b931c5518de141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.472188 kubelet[2644]: E0905 00:03:25.472145 2644 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe49435306c778e0c0811d3299ffc7d6ae1453590744e2872b931c5518de141\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.472254 kubelet[2644]: E0905 00:03:25.472213 2644 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe49435306c778e0c0811d3299ffc7d6ae1453590744e2872b931c5518de141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5655588b8f-wpbgb" Sep 5 00:03:25.472254 kubelet[2644]: E0905 00:03:25.472234 2644 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe49435306c778e0c0811d3299ffc7d6ae1453590744e2872b931c5518de141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5655588b8f-wpbgb" Sep 5 00:03:25.472298 kubelet[2644]: E0905 00:03:25.472278 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5655588b8f-wpbgb_calico-apiserver(8ed13891-25f7-4c98-b611-d4d3148e80ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5655588b8f-wpbgb_calico-apiserver(8ed13891-25f7-4c98-b611-d4d3148e80ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fe49435306c778e0c0811d3299ffc7d6ae1453590744e2872b931c5518de141\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5655588b8f-wpbgb" podUID="8ed13891-25f7-4c98-b611-d4d3148e80ea" Sep 5 
00:03:25.475450 containerd[1498]: time="2025-09-05T00:03:25.475407041Z" level=error msg="Failed to destroy network for sandbox \"d77bb6c19c9866b98c44de7702b45a01bedb35952769f3758f873a5d831a8596\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.477009 containerd[1498]: time="2025-09-05T00:03:25.476974149Z" level=error msg="Failed to destroy network for sandbox \"ddf3a7b3c143bdcbcd719e6a508b0669a7e9f0106649cf02db770b96bde8acb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.477775 containerd[1498]: time="2025-09-05T00:03:25.477741983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cz92t,Uid:8f833ab8-22e9-482c-a3eb-e267ae46b5b8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77bb6c19c9866b98c44de7702b45a01bedb35952769f3758f873a5d831a8596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.478014 kubelet[2644]: E0905 00:03:25.477977 2644 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77bb6c19c9866b98c44de7702b45a01bedb35952769f3758f873a5d831a8596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.478077 kubelet[2644]: E0905 00:03:25.478032 2644 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d77bb6c19c9866b98c44de7702b45a01bedb35952769f3758f873a5d831a8596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cz92t" Sep 5 00:03:25.478077 kubelet[2644]: E0905 00:03:25.478053 2644 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77bb6c19c9866b98c44de7702b45a01bedb35952769f3758f873a5d831a8596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cz92t" Sep 5 00:03:25.478121 kubelet[2644]: E0905 00:03:25.478093 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cz92t_kube-system(8f833ab8-22e9-482c-a3eb-e267ae46b5b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cz92t_kube-system(8f833ab8-22e9-482c-a3eb-e267ae46b5b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d77bb6c19c9866b98c44de7702b45a01bedb35952769f3758f873a5d831a8596\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cz92t" podUID="8f833ab8-22e9-482c-a3eb-e267ae46b5b8" Sep 5 00:03:25.479131 containerd[1498]: time="2025-09-05T00:03:25.478930853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxg5r,Uid:1b310bae-11fe-4356-979f-929d2920e4fe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf3a7b3c143bdcbcd719e6a508b0669a7e9f0106649cf02db770b96bde8acb0\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.479214 kubelet[2644]: E0905 00:03:25.479097 2644 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf3a7b3c143bdcbcd719e6a508b0669a7e9f0106649cf02db770b96bde8acb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.479214 kubelet[2644]: E0905 00:03:25.479156 2644 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf3a7b3c143bdcbcd719e6a508b0669a7e9f0106649cf02db770b96bde8acb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fxg5r" Sep 5 00:03:25.479214 kubelet[2644]: E0905 00:03:25.479186 2644 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf3a7b3c143bdcbcd719e6a508b0669a7e9f0106649cf02db770b96bde8acb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fxg5r" Sep 5 00:03:25.479343 kubelet[2644]: E0905 00:03:25.479228 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fxg5r_kube-system(1b310bae-11fe-4356-979f-929d2920e4fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fxg5r_kube-system(1b310bae-11fe-4356-979f-929d2920e4fe)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"ddf3a7b3c143bdcbcd719e6a508b0669a7e9f0106649cf02db770b96bde8acb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fxg5r" podUID="1b310bae-11fe-4356-979f-929d2920e4fe" Sep 5 00:03:25.482169 containerd[1498]: time="2025-09-05T00:03:25.481554153Z" level=error msg="Failed to destroy network for sandbox \"f63ed3c246dac1d3f67ebdaa751bb454874d4c5f1900f76cec27abf26bcdf129\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.483741 containerd[1498]: time="2025-09-05T00:03:25.483615976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655588b8f-x8lr2,Uid:651e4e42-bdd6-4023-8f33-2961e3821ab5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63ed3c246dac1d3f67ebdaa751bb454874d4c5f1900f76cec27abf26bcdf129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.483952 kubelet[2644]: E0905 00:03:25.483916 2644 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63ed3c246dac1d3f67ebdaa751bb454874d4c5f1900f76cec27abf26bcdf129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.484002 kubelet[2644]: E0905 00:03:25.483984 2644 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f63ed3c246dac1d3f67ebdaa751bb454874d4c5f1900f76cec27abf26bcdf129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5655588b8f-x8lr2" Sep 5 00:03:25.484030 kubelet[2644]: E0905 00:03:25.484007 2644 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63ed3c246dac1d3f67ebdaa751bb454874d4c5f1900f76cec27abf26bcdf129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5655588b8f-x8lr2" Sep 5 00:03:25.486523 containerd[1498]: time="2025-09-05T00:03:25.486465794Z" level=error msg="Failed to destroy network for sandbox \"55abffef6f6a5b95f5a7b3d7f5627bf7d7eae176a05ce0bc5512e5f691bf8edd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.488398 containerd[1498]: time="2025-09-05T00:03:25.488325779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-lqqg4,Uid:9fdc45e1-c13b-4a58-b814-a8f24e60f8eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55abffef6f6a5b95f5a7b3d7f5627bf7d7eae176a05ce0bc5512e5f691bf8edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.488807 kubelet[2644]: E0905 00:03:25.484055 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5655588b8f-x8lr2_calico-apiserver(651e4e42-bdd6-4023-8f33-2961e3821ab5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5655588b8f-x8lr2_calico-apiserver(651e4e42-bdd6-4023-8f33-2961e3821ab5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f63ed3c246dac1d3f67ebdaa751bb454874d4c5f1900f76cec27abf26bcdf129\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5655588b8f-x8lr2" podUID="651e4e42-bdd6-4023-8f33-2961e3821ab5" Sep 5 00:03:25.489102 kubelet[2644]: E0905 00:03:25.489075 2644 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55abffef6f6a5b95f5a7b3d7f5627bf7d7eae176a05ce0bc5512e5f691bf8edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:25.489154 kubelet[2644]: E0905 00:03:25.489118 2644 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55abffef6f6a5b95f5a7b3d7f5627bf7d7eae176a05ce0bc5512e5f691bf8edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-lqqg4" Sep 5 00:03:25.489154 kubelet[2644]: E0905 00:03:25.489135 2644 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55abffef6f6a5b95f5a7b3d7f5627bf7d7eae176a05ce0bc5512e5f691bf8edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-lqqg4" Sep 5 00:03:25.489222 kubelet[2644]: E0905 00:03:25.489177 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-lqqg4_calico-system(9fdc45e1-c13b-4a58-b814-a8f24e60f8eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-lqqg4_calico-system(9fdc45e1-c13b-4a58-b814-a8f24e60f8eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55abffef6f6a5b95f5a7b3d7f5627bf7d7eae176a05ce0bc5512e5f691bf8edd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-lqqg4" podUID="9fdc45e1-c13b-4a58-b814-a8f24e60f8eb" Sep 5 00:03:25.940605 containerd[1498]: time="2025-09-05T00:03:25.940521397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:03:26.234134 systemd[1]: run-netns-cni\x2dc3e76226\x2d5d8a\x2ddd1d\x2d87d3\x2d036d7104eed7.mount: Deactivated successfully. Sep 5 00:03:26.839829 systemd[1]: Created slice kubepods-besteffort-pod6bfd9129_15f0_4f2a_bd2f_cf1ce0ff2580.slice - libcontainer container kubepods-besteffort-pod6bfd9129_15f0_4f2a_bd2f_cf1ce0ff2580.slice. 
Sep 5 00:03:26.842691 containerd[1498]: time="2025-09-05T00:03:26.842660462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fs25,Uid:6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:26.890273 containerd[1498]: time="2025-09-05T00:03:26.890227419Z" level=error msg="Failed to destroy network for sandbox \"5f561129d641cc3aeab7b769ce3c8bf142f9bd501f4ab87ae6b645683370cc1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:26.892055 systemd[1]: run-netns-cni\x2d78b66db7\x2d4e37\x2d8832\x2d6ee4\x2dadfbe1ae08b0.mount: Deactivated successfully. Sep 5 00:03:26.893744 containerd[1498]: time="2025-09-05T00:03:26.893658073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fs25,Uid:6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f561129d641cc3aeab7b769ce3c8bf142f9bd501f4ab87ae6b645683370cc1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:26.894462 kubelet[2644]: E0905 00:03:26.894398 2644 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f561129d641cc3aeab7b769ce3c8bf142f9bd501f4ab87ae6b645683370cc1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:03:26.894919 kubelet[2644]: E0905 00:03:26.894773 2644 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5f561129d641cc3aeab7b769ce3c8bf142f9bd501f4ab87ae6b645683370cc1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fs25" Sep 5 00:03:26.894919 kubelet[2644]: E0905 00:03:26.894805 2644 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f561129d641cc3aeab7b769ce3c8bf142f9bd501f4ab87ae6b645683370cc1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fs25" Sep 5 00:03:26.894919 kubelet[2644]: E0905 00:03:26.894860 2644 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fs25_calico-system(6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fs25_calico-system(6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f561129d641cc3aeab7b769ce3c8bf142f9bd501f4ab87ae6b645683370cc1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fs25" podUID="6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580" Sep 5 00:03:29.835090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount598459325.mount: Deactivated successfully. 
Sep 5 00:03:30.118543 containerd[1498]: time="2025-09-05T00:03:30.117880288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:30.118543 containerd[1498]: time="2025-09-05T00:03:30.118330125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 00:03:30.119254 containerd[1498]: time="2025-09-05T00:03:30.119193640Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:30.121456 containerd[1498]: time="2025-09-05T00:03:30.120991028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:30.121610 containerd[1498]: time="2025-09-05T00:03:30.121584704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.180996507s" Sep 5 00:03:30.121677 containerd[1498]: time="2025-09-05T00:03:30.121618743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 00:03:30.130366 containerd[1498]: time="2025-09-05T00:03:30.130324806Z" level=info msg="CreateContainer within sandbox \"3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:03:30.139461 containerd[1498]: time="2025-09-05T00:03:30.138728750Z" level=info msg="Container 
20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:30.148684 containerd[1498]: time="2025-09-05T00:03:30.148635525Z" level=info msg="CreateContainer within sandbox \"3e7e818c4033c5511acff79b543f789cc1d6ff269705842a2e14f4992d478db1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea\"" Sep 5 00:03:30.149364 containerd[1498]: time="2025-09-05T00:03:30.149310480Z" level=info msg="StartContainer for \"20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea\"" Sep 5 00:03:30.150842 containerd[1498]: time="2025-09-05T00:03:30.150802031Z" level=info msg="connecting to shim 20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea" address="unix:///run/containerd/s/3cb5c4473c99da6e5dae380f843f9bb4e25acab4a5d637d10a1321fb67e707f3" protocol=ttrpc version=3 Sep 5 00:03:30.177608 systemd[1]: Started cri-containerd-20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea.scope - libcontainer container 20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea. Sep 5 00:03:30.214833 containerd[1498]: time="2025-09-05T00:03:30.214794127Z" level=info msg="StartContainer for \"20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea\" returns successfully" Sep 5 00:03:30.329701 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:03:30.329802 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 5 00:03:30.576121 kubelet[2644]: I0905 00:03:30.576062 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-whisker-backend-key-pair\") pod \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\" (UID: \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\") " Sep 5 00:03:30.576121 kubelet[2644]: I0905 00:03:30.576115 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-whisker-ca-bundle\") pod \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\" (UID: \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\") " Sep 5 00:03:30.576641 kubelet[2644]: I0905 00:03:30.576144 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlj68\" (UniqueName: \"kubernetes.io/projected/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-kube-api-access-qlj68\") pod \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\" (UID: \"549a4da7-aa1c-459a-8093-1fbfc4f5bb3d\") " Sep 5 00:03:30.605221 kubelet[2644]: I0905 00:03:30.605158 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "549a4da7-aa1c-459a-8093-1fbfc4f5bb3d" (UID: "549a4da7-aa1c-459a-8093-1fbfc4f5bb3d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:03:30.605821 kubelet[2644]: I0905 00:03:30.605791 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-kube-api-access-qlj68" (OuterVolumeSpecName: "kube-api-access-qlj68") pod "549a4da7-aa1c-459a-8093-1fbfc4f5bb3d" (UID: "549a4da7-aa1c-459a-8093-1fbfc4f5bb3d"). InnerVolumeSpecName "kube-api-access-qlj68". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:03:30.606187 kubelet[2644]: I0905 00:03:30.606151 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "549a4da7-aa1c-459a-8093-1fbfc4f5bb3d" (UID: "549a4da7-aa1c-459a-8093-1fbfc4f5bb3d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:03:30.676692 kubelet[2644]: I0905 00:03:30.676617 2644 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:03:30.676692 kubelet[2644]: I0905 00:03:30.676659 2644 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:03:30.676692 kubelet[2644]: I0905 00:03:30.676668 2644 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qlj68\" (UniqueName: \"kubernetes.io/projected/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d-kube-api-access-qlj68\") on node \"localhost\" DevicePath \"\"" Sep 5 00:03:30.836772 systemd[1]: var-lib-kubelet-pods-549a4da7\x2daa1c\x2d459a\x2d8093\x2d1fbfc4f5bb3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqlj68.mount: Deactivated successfully. Sep 5 00:03:30.836863 systemd[1]: var-lib-kubelet-pods-549a4da7\x2daa1c\x2d459a\x2d8093\x2d1fbfc4f5bb3d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:03:30.843013 systemd[1]: Removed slice kubepods-besteffort-pod549a4da7_aa1c_459a_8093_1fbfc4f5bb3d.slice - libcontainer container kubepods-besteffort-pod549a4da7_aa1c_459a_8093_1fbfc4f5bb3d.slice. 
Sep 5 00:03:30.972668 kubelet[2644]: I0905 00:03:30.972608 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9zdpt" podStartSLOduration=1.789589861 podStartE2EDuration="12.97061597s" podCreationTimestamp="2025-09-05 00:03:18 +0000 UTC" firstStartedPulling="2025-09-05 00:03:18.94120423 +0000 UTC m=+22.200151815" lastFinishedPulling="2025-09-05 00:03:30.122230379 +0000 UTC m=+33.381177924" observedRunningTime="2025-09-05 00:03:30.969513617 +0000 UTC m=+34.228461202" watchObservedRunningTime="2025-09-05 00:03:30.97061597 +0000 UTC m=+34.229563555" Sep 5 00:03:31.027435 systemd[1]: Created slice kubepods-besteffort-podd67988d8_9bcb_429e_8168_bc196cd0260b.slice - libcontainer container kubepods-besteffort-podd67988d8_9bcb_429e_8168_bc196cd0260b.slice. Sep 5 00:03:31.079304 kubelet[2644]: I0905 00:03:31.079263 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d67988d8-9bcb-429e-8168-bc196cd0260b-whisker-backend-key-pair\") pod \"whisker-749cbb9f7d-grx7f\" (UID: \"d67988d8-9bcb-429e-8168-bc196cd0260b\") " pod="calico-system/whisker-749cbb9f7d-grx7f" Sep 5 00:03:31.079790 kubelet[2644]: I0905 00:03:31.079771 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d67988d8-9bcb-429e-8168-bc196cd0260b-whisker-ca-bundle\") pod \"whisker-749cbb9f7d-grx7f\" (UID: \"d67988d8-9bcb-429e-8168-bc196cd0260b\") " pod="calico-system/whisker-749cbb9f7d-grx7f" Sep 5 00:03:31.080028 kubelet[2644]: I0905 00:03:31.079979 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqnls\" (UniqueName: \"kubernetes.io/projected/d67988d8-9bcb-429e-8168-bc196cd0260b-kube-api-access-tqnls\") pod \"whisker-749cbb9f7d-grx7f\" (UID: 
\"d67988d8-9bcb-429e-8168-bc196cd0260b\") " pod="calico-system/whisker-749cbb9f7d-grx7f" Sep 5 00:03:31.332334 containerd[1498]: time="2025-09-05T00:03:31.332287730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-749cbb9f7d-grx7f,Uid:d67988d8-9bcb-429e-8168-bc196cd0260b,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:31.507655 systemd-networkd[1438]: cali89958ac00b0: Link UP Sep 5 00:03:31.507822 systemd-networkd[1438]: cali89958ac00b0: Gained carrier Sep 5 00:03:31.523811 containerd[1498]: 2025-09-05 00:03:31.360 [INFO][3793] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:03:31.523811 containerd[1498]: 2025-09-05 00:03:31.389 [INFO][3793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--749cbb9f7d--grx7f-eth0 whisker-749cbb9f7d- calico-system d67988d8-9bcb-429e-8168-bc196cd0260b 879 0 2025-09-05 00:03:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:749cbb9f7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-749cbb9f7d-grx7f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali89958ac00b0 [] [] }} ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Namespace="calico-system" Pod="whisker-749cbb9f7d-grx7f" WorkloadEndpoint="localhost-k8s-whisker--749cbb9f7d--grx7f-" Sep 5 00:03:31.523811 containerd[1498]: 2025-09-05 00:03:31.389 [INFO][3793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Namespace="calico-system" Pod="whisker-749cbb9f7d-grx7f" WorkloadEndpoint="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" Sep 5 00:03:31.523811 containerd[1498]: 2025-09-05 00:03:31.455 [INFO][3807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" HandleID="k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Workload="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.456 [INFO][3807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" HandleID="k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Workload="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000556bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-749cbb9f7d-grx7f", "timestamp":"2025-09-05 00:03:31.45581154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.456 [INFO][3807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.456 [INFO][3807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.456 [INFO][3807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.469 [INFO][3807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" host="localhost" Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.475 [INFO][3807] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.479 [INFO][3807] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.482 [INFO][3807] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.485 [INFO][3807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:31.524044 containerd[1498]: 2025-09-05 00:03:31.485 [INFO][3807] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" host="localhost" Sep 5 00:03:31.524246 containerd[1498]: 2025-09-05 00:03:31.487 [INFO][3807] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86 Sep 5 00:03:31.524246 containerd[1498]: 2025-09-05 00:03:31.491 [INFO][3807] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" host="localhost" Sep 5 00:03:31.524246 containerd[1498]: 2025-09-05 00:03:31.496 [INFO][3807] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" host="localhost" Sep 5 00:03:31.524246 containerd[1498]: 2025-09-05 00:03:31.496 [INFO][3807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" host="localhost" Sep 5 00:03:31.524246 containerd[1498]: 2025-09-05 00:03:31.496 [INFO][3807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:03:31.524246 containerd[1498]: 2025-09-05 00:03:31.496 [INFO][3807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" HandleID="k8s-pod-network.9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Workload="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" Sep 5 00:03:31.524354 containerd[1498]: 2025-09-05 00:03:31.498 [INFO][3793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Namespace="calico-system" Pod="whisker-749cbb9f7d-grx7f" WorkloadEndpoint="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--749cbb9f7d--grx7f-eth0", GenerateName:"whisker-749cbb9f7d-", Namespace:"calico-system", SelfLink:"", UID:"d67988d8-9bcb-429e-8168-bc196cd0260b", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"749cbb9f7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-749cbb9f7d-grx7f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali89958ac00b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:31.524354 containerd[1498]: 2025-09-05 00:03:31.499 [INFO][3793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Namespace="calico-system" Pod="whisker-749cbb9f7d-grx7f" WorkloadEndpoint="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" Sep 5 00:03:31.524430 containerd[1498]: 2025-09-05 00:03:31.500 [INFO][3793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89958ac00b0 ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Namespace="calico-system" Pod="whisker-749cbb9f7d-grx7f" WorkloadEndpoint="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" Sep 5 00:03:31.524430 containerd[1498]: 2025-09-05 00:03:31.508 [INFO][3793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Namespace="calico-system" Pod="whisker-749cbb9f7d-grx7f" WorkloadEndpoint="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" Sep 5 00:03:31.524508 containerd[1498]: 2025-09-05 00:03:31.509 [INFO][3793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Namespace="calico-system" Pod="whisker-749cbb9f7d-grx7f" 
WorkloadEndpoint="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--749cbb9f7d--grx7f-eth0", GenerateName:"whisker-749cbb9f7d-", Namespace:"calico-system", SelfLink:"", UID:"d67988d8-9bcb-429e-8168-bc196cd0260b", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"749cbb9f7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86", Pod:"whisker-749cbb9f7d-grx7f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali89958ac00b0", MAC:"56:cb:c1:ad:b4:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:31.524559 containerd[1498]: 2025-09-05 00:03:31.521 [INFO][3793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" Namespace="calico-system" Pod="whisker-749cbb9f7d-grx7f" WorkloadEndpoint="localhost-k8s-whisker--749cbb9f7d--grx7f-eth0" Sep 5 00:03:31.585167 containerd[1498]: time="2025-09-05T00:03:31.584399877Z" level=info msg="connecting to shim 
9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86" address="unix:///run/containerd/s/57416d90989165c29ef845c364006a2b6291435176a97e189eef8e6447f1ca3f" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:31.610582 systemd[1]: Started cri-containerd-9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86.scope - libcontainer container 9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86. Sep 5 00:03:31.620310 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:03:31.646462 containerd[1498]: time="2025-09-05T00:03:31.645787164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-749cbb9f7d-grx7f,Uid:d67988d8-9bcb-429e-8168-bc196cd0260b,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86\"" Sep 5 00:03:31.648125 containerd[1498]: time="2025-09-05T00:03:31.647933351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:03:31.959495 kubelet[2644]: I0905 00:03:31.959454 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:03:32.664065 containerd[1498]: time="2025-09-05T00:03:32.664011184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:32.665213 containerd[1498]: time="2025-09-05T00:03:32.665171057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 00:03:32.666133 containerd[1498]: time="2025-09-05T00:03:32.666105091Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:32.668599 containerd[1498]: time="2025-09-05T00:03:32.668565956Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:32.669213 containerd[1498]: time="2025-09-05T00:03:32.669177952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.021216562s" Sep 5 00:03:32.669245 containerd[1498]: time="2025-09-05T00:03:32.669210992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 00:03:32.676000 containerd[1498]: time="2025-09-05T00:03:32.675964870Z" level=info msg="CreateContainer within sandbox \"9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:03:32.682329 containerd[1498]: time="2025-09-05T00:03:32.681562155Z" level=info msg="Container 38639e7c3386f510e680ec4600feba6e8f6a113b810916e5d1e13e42c6337b86: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:32.689105 containerd[1498]: time="2025-09-05T00:03:32.689058229Z" level=info msg="CreateContainer within sandbox \"9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"38639e7c3386f510e680ec4600feba6e8f6a113b810916e5d1e13e42c6337b86\"" Sep 5 00:03:32.689519 containerd[1498]: time="2025-09-05T00:03:32.689495146Z" level=info msg="StartContainer for \"38639e7c3386f510e680ec4600feba6e8f6a113b810916e5d1e13e42c6337b86\"" Sep 5 00:03:32.696896 containerd[1498]: time="2025-09-05T00:03:32.696853061Z" level=info msg="connecting to shim 
38639e7c3386f510e680ec4600feba6e8f6a113b810916e5d1e13e42c6337b86" address="unix:///run/containerd/s/57416d90989165c29ef845c364006a2b6291435176a97e189eef8e6447f1ca3f" protocol=ttrpc version=3 Sep 5 00:03:32.715742 systemd[1]: Started cri-containerd-38639e7c3386f510e680ec4600feba6e8f6a113b810916e5d1e13e42c6337b86.scope - libcontainer container 38639e7c3386f510e680ec4600feba6e8f6a113b810916e5d1e13e42c6337b86. Sep 5 00:03:32.716570 systemd-networkd[1438]: cali89958ac00b0: Gained IPv6LL Sep 5 00:03:32.758462 containerd[1498]: time="2025-09-05T00:03:32.758413079Z" level=info msg="StartContainer for \"38639e7c3386f510e680ec4600feba6e8f6a113b810916e5d1e13e42c6337b86\" returns successfully" Sep 5 00:03:32.760602 containerd[1498]: time="2025-09-05T00:03:32.760576626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:03:32.838373 kubelet[2644]: I0905 00:03:32.838327 2644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549a4da7-aa1c-459a-8093-1fbfc4f5bb3d" path="/var/lib/kubelet/pods/549a4da7-aa1c-459a-8093-1fbfc4f5bb3d/volumes" Sep 5 00:03:34.261831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753400708.mount: Deactivated successfully. 
Sep 5 00:03:34.276156 containerd[1498]: time="2025-09-05T00:03:34.275672607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:34.276156 containerd[1498]: time="2025-09-05T00:03:34.276019005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 00:03:34.276902 containerd[1498]: time="2025-09-05T00:03:34.276860360Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:34.278688 containerd[1498]: time="2025-09-05T00:03:34.278664710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:34.280024 containerd[1498]: time="2025-09-05T00:03:34.279995542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.519391036s" Sep 5 00:03:34.280024 containerd[1498]: time="2025-09-05T00:03:34.280023662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 00:03:34.284177 containerd[1498]: time="2025-09-05T00:03:34.284128518Z" level=info msg="CreateContainer within sandbox \"9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:03:34.292479 
containerd[1498]: time="2025-09-05T00:03:34.291427395Z" level=info msg="Container 88aa23726574540a0d6dc949a8fa194df00062837c6323afd906dbf10beb93f6: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:34.304136 containerd[1498]: time="2025-09-05T00:03:34.304077442Z" level=info msg="CreateContainer within sandbox \"9a2b1d565bc6f7c0ee50f2a5f61c1faacf7bb4aba6bb184950bf63e03ec9fe86\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"88aa23726574540a0d6dc949a8fa194df00062837c6323afd906dbf10beb93f6\"" Sep 5 00:03:34.305703 containerd[1498]: time="2025-09-05T00:03:34.305598833Z" level=info msg="StartContainer for \"88aa23726574540a0d6dc949a8fa194df00062837c6323afd906dbf10beb93f6\"" Sep 5 00:03:34.306891 containerd[1498]: time="2025-09-05T00:03:34.306862665Z" level=info msg="connecting to shim 88aa23726574540a0d6dc949a8fa194df00062837c6323afd906dbf10beb93f6" address="unix:///run/containerd/s/57416d90989165c29ef845c364006a2b6291435176a97e189eef8e6447f1ca3f" protocol=ttrpc version=3 Sep 5 00:03:34.326593 systemd[1]: Started cri-containerd-88aa23726574540a0d6dc949a8fa194df00062837c6323afd906dbf10beb93f6.scope - libcontainer container 88aa23726574540a0d6dc949a8fa194df00062837c6323afd906dbf10beb93f6. 
Sep 5 00:03:34.360501 containerd[1498]: time="2025-09-05T00:03:34.360418753Z" level=info msg="StartContainer for \"88aa23726574540a0d6dc949a8fa194df00062837c6323afd906dbf10beb93f6\" returns successfully" Sep 5 00:03:35.018115 kubelet[2644]: I0905 00:03:35.017974 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-749cbb9f7d-grx7f" podStartSLOduration=2.384831378 podStartE2EDuration="5.017942563s" podCreationTimestamp="2025-09-05 00:03:30 +0000 UTC" firstStartedPulling="2025-09-05 00:03:31.647565753 +0000 UTC m=+34.906513338" lastFinishedPulling="2025-09-05 00:03:34.280676978 +0000 UTC m=+37.539624523" observedRunningTime="2025-09-05 00:03:35.017107287 +0000 UTC m=+38.276054912" watchObservedRunningTime="2025-09-05 00:03:35.017942563 +0000 UTC m=+38.276890148" Sep 5 00:03:35.830716 containerd[1498]: time="2025-09-05T00:03:35.830538400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cz92t,Uid:8f833ab8-22e9-482c-a3eb-e267ae46b5b8,Namespace:kube-system,Attempt:0,}" Sep 5 00:03:35.926182 systemd-networkd[1438]: cali5ed158cde7c: Link UP Sep 5 00:03:35.927004 systemd-networkd[1438]: cali5ed158cde7c: Gained carrier Sep 5 00:03:35.939730 containerd[1498]: 2025-09-05 00:03:35.851 [INFO][4126] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:03:35.939730 containerd[1498]: 2025-09-05 00:03:35.865 [INFO][4126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cz92t-eth0 coredns-674b8bbfcf- kube-system 8f833ab8-22e9-482c-a3eb-e267ae46b5b8 798 0 2025-09-05 00:03:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cz92t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ed158cde7c [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cz92t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cz92t-" Sep 5 00:03:35.939730 containerd[1498]: 2025-09-05 00:03:35.865 [INFO][4126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cz92t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" Sep 5 00:03:35.939730 containerd[1498]: 2025-09-05 00:03:35.887 [INFO][4141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" HandleID="k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Workload="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.887 [INFO][4141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" HandleID="k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Workload="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cz92t", "timestamp":"2025-09-05 00:03:35.887299838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.887 [INFO][4141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.887 [INFO][4141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.887 [INFO][4141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.896 [INFO][4141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" host="localhost" Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.903 [INFO][4141] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.908 [INFO][4141] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.910 [INFO][4141] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.911 [INFO][4141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:35.939931 containerd[1498]: 2025-09-05 00:03:35.911 [INFO][4141] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" host="localhost" Sep 5 00:03:35.940144 containerd[1498]: 2025-09-05 00:03:35.913 [INFO][4141] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17 Sep 5 00:03:35.940144 containerd[1498]: 2025-09-05 00:03:35.916 [INFO][4141] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" host="localhost" Sep 5 00:03:35.940144 containerd[1498]: 2025-09-05 00:03:35.921 [INFO][4141] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" host="localhost" Sep 5 00:03:35.940144 containerd[1498]: 2025-09-05 00:03:35.921 [INFO][4141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" host="localhost" Sep 5 00:03:35.940144 containerd[1498]: 2025-09-05 00:03:35.922 [INFO][4141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:03:35.940144 containerd[1498]: 2025-09-05 00:03:35.922 [INFO][4141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" HandleID="k8s-pod-network.5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Workload="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" Sep 5 00:03:35.940254 containerd[1498]: 2025-09-05 00:03:35.923 [INFO][4126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cz92t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cz92t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f833ab8-22e9-482c-a3eb-e267ae46b5b8", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cz92t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ed158cde7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:35.940316 containerd[1498]: 2025-09-05 00:03:35.924 [INFO][4126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cz92t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" Sep 5 00:03:35.940316 containerd[1498]: 2025-09-05 00:03:35.924 [INFO][4126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ed158cde7c ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cz92t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" Sep 5 00:03:35.940316 containerd[1498]: 2025-09-05 00:03:35.927 [INFO][4126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-cz92t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" Sep 5 00:03:35.940427 containerd[1498]: 2025-09-05 00:03:35.927 [INFO][4126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cz92t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cz92t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f833ab8-22e9-482c-a3eb-e267ae46b5b8", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17", Pod:"coredns-674b8bbfcf-cz92t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ed158cde7c", MAC:"82:74:42:a3:2d:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:35.940427 containerd[1498]: 2025-09-05 00:03:35.937 [INFO][4126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cz92t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cz92t-eth0" Sep 5 00:03:35.962479 containerd[1498]: time="2025-09-05T00:03:35.962409093Z" level=info msg="connecting to shim 5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17" address="unix:///run/containerd/s/3339b333d0a4c4445fcb457fb6a4333eb868835fa5373874e1b6aa5b6754be77" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:35.991584 systemd[1]: Started cri-containerd-5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17.scope - libcontainer container 5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17. 
Sep 5 00:03:36.003381 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:03:36.047617 containerd[1498]: time="2025-09-05T00:03:36.047576338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cz92t,Uid:8f833ab8-22e9-482c-a3eb-e267ae46b5b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17\"" Sep 5 00:03:36.052469 containerd[1498]: time="2025-09-05T00:03:36.052385791Z" level=info msg="CreateContainer within sandbox \"5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:03:36.059466 containerd[1498]: time="2025-09-05T00:03:36.059281913Z" level=info msg="Container 0cdb437ba42532bfeef4f0e949b7254317b4572fd8c007b4f7a5d48a3d5e4b9d: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:36.064580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964940597.mount: Deactivated successfully. 
Sep 5 00:03:36.066890 containerd[1498]: time="2025-09-05T00:03:36.066783872Z" level=info msg="CreateContainer within sandbox \"5c939be7be28381a942873a83e8d1926203b58366fdb639a491d22d26d784b17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cdb437ba42532bfeef4f0e949b7254317b4572fd8c007b4f7a5d48a3d5e4b9d\"" Sep 5 00:03:36.068369 containerd[1498]: time="2025-09-05T00:03:36.068336303Z" level=info msg="StartContainer for \"0cdb437ba42532bfeef4f0e949b7254317b4572fd8c007b4f7a5d48a3d5e4b9d\"" Sep 5 00:03:36.069196 containerd[1498]: time="2025-09-05T00:03:36.069162299Z" level=info msg="connecting to shim 0cdb437ba42532bfeef4f0e949b7254317b4572fd8c007b4f7a5d48a3d5e4b9d" address="unix:///run/containerd/s/3339b333d0a4c4445fcb457fb6a4333eb868835fa5373874e1b6aa5b6754be77" protocol=ttrpc version=3 Sep 5 00:03:36.091543 systemd[1]: Started cri-containerd-0cdb437ba42532bfeef4f0e949b7254317b4572fd8c007b4f7a5d48a3d5e4b9d.scope - libcontainer container 0cdb437ba42532bfeef4f0e949b7254317b4572fd8c007b4f7a5d48a3d5e4b9d. 
Sep 5 00:03:36.130626 containerd[1498]: time="2025-09-05T00:03:36.130591440Z" level=info msg="StartContainer for \"0cdb437ba42532bfeef4f0e949b7254317b4572fd8c007b4f7a5d48a3d5e4b9d\" returns successfully" Sep 5 00:03:36.832817 containerd[1498]: time="2025-09-05T00:03:36.832729853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6488ffd8d-z7pjc,Uid:c5861876-7413-4f2b-beed-2f51ab539333,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:36.833804 containerd[1498]: time="2025-09-05T00:03:36.832875052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655588b8f-x8lr2,Uid:651e4e42-bdd6-4023-8f33-2961e3821ab5,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:03:36.998779 kubelet[2644]: I0905 00:03:36.996586 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cz92t" podStartSLOduration=32.99657083 podStartE2EDuration="32.99657083s" podCreationTimestamp="2025-09-05 00:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:03:36.995976033 +0000 UTC m=+40.254923618" watchObservedRunningTime="2025-09-05 00:03:36.99657083 +0000 UTC m=+40.255518415" Sep 5 00:03:37.012617 systemd-networkd[1438]: cali0be18437447: Link UP Sep 5 00:03:37.013569 systemd-networkd[1438]: cali0be18437447: Gained carrier Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.868 [INFO][4259] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.891 [INFO][4259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0 calico-kube-controllers-6488ffd8d- calico-system c5861876-7413-4f2b-beed-2f51ab539333 802 0 2025-09-05 00:03:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:6488ffd8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6488ffd8d-z7pjc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0be18437447 [] [] }} ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Namespace="calico-system" Pod="calico-kube-controllers-6488ffd8d-z7pjc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.892 [INFO][4259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Namespace="calico-system" Pod="calico-kube-controllers-6488ffd8d-z7pjc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.926 [INFO][4290] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" HandleID="k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Workload="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.926 [INFO][4290] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" HandleID="k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Workload="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006044e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6488ffd8d-z7pjc", "timestamp":"2025-09-05 00:03:36.926703415 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.926 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.926 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.926 [INFO][4290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.948 [INFO][4290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.959 [INFO][4290] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.966 [INFO][4290] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.973 [INFO][4290] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.975 [INFO][4290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.975 [INFO][4290] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.977 [INFO][4290] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf 
Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.983 [INFO][4290] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.994 [INFO][4290] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.994 [INFO][4290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" host="localhost" Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.994 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:03:37.029582 containerd[1498]: 2025-09-05 00:03:36.995 [INFO][4290] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" HandleID="k8s-pod-network.2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Workload="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" Sep 5 00:03:37.030364 containerd[1498]: 2025-09-05 00:03:37.002 [INFO][4259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Namespace="calico-system" Pod="calico-kube-controllers-6488ffd8d-z7pjc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0", GenerateName:"calico-kube-controllers-6488ffd8d-", Namespace:"calico-system", SelfLink:"", 
UID:"c5861876-7413-4f2b-beed-2f51ab539333", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6488ffd8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6488ffd8d-z7pjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0be18437447", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:37.030364 containerd[1498]: 2025-09-05 00:03:37.002 [INFO][4259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Namespace="calico-system" Pod="calico-kube-controllers-6488ffd8d-z7pjc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" Sep 5 00:03:37.030364 containerd[1498]: 2025-09-05 00:03:37.002 [INFO][4259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0be18437447 ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Namespace="calico-system" Pod="calico-kube-controllers-6488ffd8d-z7pjc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" Sep 5 
00:03:37.030364 containerd[1498]: 2025-09-05 00:03:37.014 [INFO][4259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Namespace="calico-system" Pod="calico-kube-controllers-6488ffd8d-z7pjc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" Sep 5 00:03:37.030364 containerd[1498]: 2025-09-05 00:03:37.014 [INFO][4259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Namespace="calico-system" Pod="calico-kube-controllers-6488ffd8d-z7pjc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0", GenerateName:"calico-kube-controllers-6488ffd8d-", Namespace:"calico-system", SelfLink:"", UID:"c5861876-7413-4f2b-beed-2f51ab539333", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6488ffd8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf", Pod:"calico-kube-controllers-6488ffd8d-z7pjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0be18437447", MAC:"ca:55:06:45:a3:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:37.030364 containerd[1498]: 2025-09-05 00:03:37.027 [INFO][4259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" Namespace="calico-system" Pod="calico-kube-controllers-6488ffd8d-z7pjc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6488ffd8d--z7pjc-eth0" Sep 5 00:03:37.147940 systemd-networkd[1438]: cali17bdef55911: Link UP Sep 5 00:03:37.148643 systemd-networkd[1438]: cali17bdef55911: Gained carrier Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:36.878 [INFO][4272] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:36.900 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0 calico-apiserver-5655588b8f- calico-apiserver 651e4e42-bdd6-4023-8f33-2961e3821ab5 806 0 2025-09-05 00:03:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5655588b8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5655588b8f-x8lr2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali17bdef55911 [] [] }} ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-x8lr2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:36.900 [INFO][4272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-x8lr2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:36.934 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" HandleID="k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Workload="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:36.934 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" HandleID="k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Workload="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5655588b8f-x8lr2", "timestamp":"2025-09-05 00:03:36.934646891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:36.934 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:36.994 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:36.995 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.045 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.082 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.086 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.088 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.091 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.091 [INFO][4296] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.092 [INFO][4296] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5 Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.103 [INFO][4296] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.128 [INFO][4296] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.128 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" host="localhost" Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.128 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:03:37.162466 containerd[1498]: 2025-09-05 00:03:37.128 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" HandleID="k8s-pod-network.43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Workload="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" Sep 5 00:03:37.163001 containerd[1498]: 2025-09-05 00:03:37.137 [INFO][4272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-x8lr2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0", GenerateName:"calico-apiserver-5655588b8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"651e4e42-bdd6-4023-8f33-2961e3821ab5", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5655588b8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5655588b8f-x8lr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali17bdef55911", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:37.163001 containerd[1498]: 2025-09-05 00:03:37.137 [INFO][4272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-x8lr2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" Sep 5 00:03:37.163001 containerd[1498]: 2025-09-05 00:03:37.137 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17bdef55911 ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-x8lr2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" Sep 5 00:03:37.163001 containerd[1498]: 2025-09-05 00:03:37.148 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-x8lr2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" Sep 5 00:03:37.163001 containerd[1498]: 2025-09-05 00:03:37.149 [INFO][4272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-x8lr2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0", GenerateName:"calico-apiserver-5655588b8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"651e4e42-bdd6-4023-8f33-2961e3821ab5", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5655588b8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5", Pod:"calico-apiserver-5655588b8f-x8lr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali17bdef55911", MAC:"be:38:6c:3d:93:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:37.163001 containerd[1498]: 2025-09-05 00:03:37.159 [INFO][4272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-x8lr2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--x8lr2-eth0" Sep 5 00:03:37.190682 containerd[1498]: time="2025-09-05T00:03:37.190591149Z" level=info msg="connecting to shim 2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf" address="unix:///run/containerd/s/e7740016c696b8ad00dc94488553f17713d3200db64f72f99e33a0ef90441d13" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:37.215518 containerd[1498]: time="2025-09-05T00:03:37.212158673Z" level=info msg="connecting to shim 43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5" address="unix:///run/containerd/s/c86a300032832829dae648b896d936dd286e21828fb10e4af1bde337c8a69516" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:37.240595 systemd[1]: Started cri-containerd-2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf.scope - libcontainer container 2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf. Sep 5 00:03:37.254628 systemd[1]: Started cri-containerd-43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5.scope - libcontainer container 43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5. 
Sep 5 00:03:37.269897 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:03:37.271533 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:03:37.273779 kubelet[2644]: I0905 00:03:37.273752 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:03:37.310396 containerd[1498]: time="2025-09-05T00:03:37.310333587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655588b8f-x8lr2,Uid:651e4e42-bdd6-4023-8f33-2961e3821ab5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5\"" Sep 5 00:03:37.311624 containerd[1498]: time="2025-09-05T00:03:37.311583660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6488ffd8d-z7pjc,Uid:c5861876-7413-4f2b-beed-2f51ab539333,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf\"" Sep 5 00:03:37.313543 containerd[1498]: time="2025-09-05T00:03:37.313299931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:03:37.388627 systemd-networkd[1438]: cali5ed158cde7c: Gained IPv6LL Sep 5 00:03:38.255049 systemd-networkd[1438]: vxlan.calico: Link UP Sep 5 00:03:38.255060 systemd-networkd[1438]: vxlan.calico: Gained carrier Sep 5 00:03:38.831419 containerd[1498]: time="2025-09-05T00:03:38.831133904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fs25,Uid:6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:38.831836 containerd[1498]: time="2025-09-05T00:03:38.831523742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655588b8f-wpbgb,Uid:8ed13891-25f7-4c98-b611-d4d3148e80ea,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:03:38.924564 
systemd-networkd[1438]: cali0be18437447: Gained IPv6LL Sep 5 00:03:38.966162 systemd-networkd[1438]: cali2febb07d831: Link UP Sep 5 00:03:38.967355 systemd-networkd[1438]: cali2febb07d831: Gained carrier Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.900 [INFO][4590] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0 calico-apiserver-5655588b8f- calico-apiserver 8ed13891-25f7-4c98-b611-d4d3148e80ea 808 0 2025-09-05 00:03:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5655588b8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5655588b8f-wpbgb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2febb07d831 [] [] }} ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-wpbgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.900 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-wpbgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.923 [INFO][4614] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" HandleID="k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Workload="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 
00:03:38.924 [INFO][4614] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" HandleID="k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Workload="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137500), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5655588b8f-wpbgb", "timestamp":"2025-09-05 00:03:38.923968579 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.924 [INFO][4614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.924 [INFO][4614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.924 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.934 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.941 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.945 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.947 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.949 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.949 [INFO][4614] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.950 [INFO][4614] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59 Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.953 [INFO][4614] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.959 [INFO][4614] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.959 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" host="localhost" Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.959 [INFO][4614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:03:38.980836 containerd[1498]: 2025-09-05 00:03:38.959 [INFO][4614] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" HandleID="k8s-pod-network.02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Workload="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" Sep 5 00:03:38.981352 containerd[1498]: 2025-09-05 00:03:38.963 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-wpbgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0", GenerateName:"calico-apiserver-5655588b8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ed13891-25f7-4c98-b611-d4d3148e80ea", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5655588b8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5655588b8f-wpbgb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2febb07d831", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:38.981352 containerd[1498]: 2025-09-05 00:03:38.964 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-wpbgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" Sep 5 00:03:38.981352 containerd[1498]: 2025-09-05 00:03:38.964 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2febb07d831 ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-wpbgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" Sep 5 00:03:38.981352 containerd[1498]: 2025-09-05 00:03:38.967 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-wpbgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" Sep 5 00:03:38.981352 containerd[1498]: 2025-09-05 00:03:38.967 [INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-wpbgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0", GenerateName:"calico-apiserver-5655588b8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ed13891-25f7-4c98-b611-d4d3148e80ea", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5655588b8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59", Pod:"calico-apiserver-5655588b8f-wpbgb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2febb07d831", MAC:"c2:f7:06:1d:b1:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:38.981352 containerd[1498]: 2025-09-05 00:03:38.978 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" Namespace="calico-apiserver" Pod="calico-apiserver-5655588b8f-wpbgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5655588b8f--wpbgb-eth0" Sep 5 00:03:39.001082 containerd[1498]: time="2025-09-05T00:03:39.001043656Z" level=info msg="connecting to shim 02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59" address="unix:///run/containerd/s/ce1013812e9b8edb74ad9675f052616c810b5f13594dbdc456825016e6ba37d2" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:39.024663 systemd[1]: Started cri-containerd-02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59.scope - libcontainer container 02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59. Sep 5 00:03:39.038498 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:03:39.052611 systemd-networkd[1438]: cali17bdef55911: Gained IPv6LL Sep 5 00:03:39.068396 containerd[1498]: time="2025-09-05T00:03:39.068217274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5655588b8f-wpbgb,Uid:8ed13891-25f7-4c98-b611-d4d3148e80ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59\"" Sep 5 00:03:39.075177 systemd-networkd[1438]: caliac93eb8fd33: Link UP Sep 5 00:03:39.075859 systemd-networkd[1438]: caliac93eb8fd33: Gained carrier Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:38.892 [INFO][4579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4fs25-eth0 csi-node-driver- calico-system 6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580 707 0 2025-09-05 00:03:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4fs25 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliac93eb8fd33 [] [] }} ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Namespace="calico-system" Pod="csi-node-driver-4fs25" WorkloadEndpoint="localhost-k8s-csi--node--driver--4fs25-" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:38.892 [INFO][4579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Namespace="calico-system" Pod="csi-node-driver-4fs25" WorkloadEndpoint="localhost-k8s-csi--node--driver--4fs25-eth0" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:38.930 [INFO][4608] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" HandleID="k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Workload="localhost-k8s-csi--node--driver--4fs25-eth0" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:38.931 [INFO][4608] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" HandleID="k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Workload="localhost-k8s-csi--node--driver--4fs25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4fs25", "timestamp":"2025-09-05 00:03:38.930935263 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 
00:03:38.931 [INFO][4608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:38.959 [INFO][4608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:38.959 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.035 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" host="localhost" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.041 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.046 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.048 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.052 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.052 [INFO][4608] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" host="localhost" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.054 [INFO][4608] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606 Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.058 [INFO][4608] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" host="localhost" Sep 5 
00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.066 [INFO][4608] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" host="localhost" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.066 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" host="localhost" Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.067 [INFO][4608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:03:39.090551 containerd[1498]: 2025-09-05 00:03:39.067 [INFO][4608] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" HandleID="k8s-pod-network.e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Workload="localhost-k8s-csi--node--driver--4fs25-eth0" Sep 5 00:03:39.091107 containerd[1498]: 2025-09-05 00:03:39.070 [INFO][4579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Namespace="calico-system" Pod="csi-node-driver-4fs25" WorkloadEndpoint="localhost-k8s-csi--node--driver--4fs25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4fs25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4fs25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac93eb8fd33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:39.091107 containerd[1498]: 2025-09-05 00:03:39.071 [INFO][4579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Namespace="calico-system" Pod="csi-node-driver-4fs25" WorkloadEndpoint="localhost-k8s-csi--node--driver--4fs25-eth0" Sep 5 00:03:39.091107 containerd[1498]: 2025-09-05 00:03:39.071 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac93eb8fd33 ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Namespace="calico-system" Pod="csi-node-driver-4fs25" WorkloadEndpoint="localhost-k8s-csi--node--driver--4fs25-eth0" Sep 5 00:03:39.091107 containerd[1498]: 2025-09-05 00:03:39.076 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Namespace="calico-system" Pod="csi-node-driver-4fs25" WorkloadEndpoint="localhost-k8s-csi--node--driver--4fs25-eth0" Sep 5 00:03:39.091107 containerd[1498]: 2025-09-05 00:03:39.076 [INFO][4579] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Namespace="calico-system" Pod="csi-node-driver-4fs25" WorkloadEndpoint="localhost-k8s-csi--node--driver--4fs25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4fs25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606", Pod:"csi-node-driver-4fs25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac93eb8fd33", MAC:"a2:67:8d:a2:9f:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:39.091107 containerd[1498]: 2025-09-05 00:03:39.085 [INFO][4579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" Namespace="calico-system" Pod="csi-node-driver-4fs25" WorkloadEndpoint="localhost-k8s-csi--node--driver--4fs25-eth0" Sep 5 00:03:39.107878 containerd[1498]: time="2025-09-05T00:03:39.107795272Z" level=info msg="connecting to shim e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606" address="unix:///run/containerd/s/e77ac5edf6e98ab86f548199a02147912ca7fefe55179d26552344ec5bf1ae32" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:39.131609 systemd[1]: Started cri-containerd-e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606.scope - libcontainer container e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606. Sep 5 00:03:39.145974 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:03:39.161752 containerd[1498]: time="2025-09-05T00:03:39.161715637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fs25,Uid:6bfd9129-15f0-4f2a-bd2f-cf1ce0ff2580,Namespace:calico-system,Attempt:0,} returns sandbox id \"e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606\"" Sep 5 00:03:39.831153 containerd[1498]: time="2025-09-05T00:03:39.831097105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxg5r,Uid:1b310bae-11fe-4356-979f-929d2920e4fe,Namespace:kube-system,Attempt:0,}" Sep 5 00:03:39.844348 containerd[1498]: time="2025-09-05T00:03:39.843984279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-lqqg4,Uid:9fdc45e1-c13b-4a58-b814-a8f24e60f8eb,Namespace:calico-system,Attempt:0,}" Sep 5 00:03:39.884803 systemd-networkd[1438]: vxlan.calico: Gained IPv6LL Sep 5 00:03:40.018410 systemd-networkd[1438]: caliebbe7efa48a: Link UP Sep 5 00:03:40.020822 systemd-networkd[1438]: caliebbe7efa48a: Gained carrier Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.887 [INFO][4735] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0 coredns-674b8bbfcf- kube-system 1b310bae-11fe-4356-979f-929d2920e4fe 804 0 2025-09-05 00:03:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fxg5r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebbe7efa48a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxg5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxg5r-" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.887 [INFO][4735] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxg5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.946 [INFO][4762] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" HandleID="k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Workload="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.946 [INFO][4762] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" HandleID="k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Workload="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4c0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fxg5r", "timestamp":"2025-09-05 00:03:39.946015519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.946 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.946 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.946 [INFO][4762] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.957 [INFO][4762] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" host="localhost" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.961 [INFO][4762] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.966 [INFO][4762] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.968 [INFO][4762] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.973 [INFO][4762] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.973 [INFO][4762] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" host="localhost" Sep 5 00:03:40.045043 
containerd[1498]: 2025-09-05 00:03:39.975 [INFO][4762] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9 Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:39.985 [INFO][4762] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" host="localhost" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:40.010 [INFO][4762] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" host="localhost" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:40.010 [INFO][4762] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" host="localhost" Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:40.010 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:03:40.045043 containerd[1498]: 2025-09-05 00:03:40.011 [INFO][4762] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" HandleID="k8s-pod-network.e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Workload="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" Sep 5 00:03:40.046228 containerd[1498]: 2025-09-05 00:03:40.014 [INFO][4735] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxg5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1b310bae-11fe-4356-979f-929d2920e4fe", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fxg5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebbe7efa48a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:40.046228 containerd[1498]: 2025-09-05 00:03:40.014 [INFO][4735] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxg5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" Sep 5 00:03:40.046228 containerd[1498]: 2025-09-05 00:03:40.014 [INFO][4735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebbe7efa48a ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxg5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" Sep 5 00:03:40.046228 containerd[1498]: 2025-09-05 00:03:40.019 [INFO][4735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxg5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" Sep 5 00:03:40.046228 containerd[1498]: 2025-09-05 00:03:40.020 [INFO][4735] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxg5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1b310bae-11fe-4356-979f-929d2920e4fe", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9", Pod:"coredns-674b8bbfcf-fxg5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebbe7efa48a", MAC:"6a:42:e7:a8:98:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:40.046228 containerd[1498]: 2025-09-05 00:03:40.042 [INFO][4735] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-fxg5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fxg5r-eth0" Sep 5 00:03:40.096313 systemd-networkd[1438]: calie88e6d7a359: Link UP Sep 5 00:03:40.096483 systemd-networkd[1438]: calie88e6d7a359: Gained carrier Sep 5 00:03:40.106500 containerd[1498]: time="2025-09-05T00:03:40.104983521Z" level=info msg="connecting to shim e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9" address="unix:///run/containerd/s/2963cae4626c8918ef81034c3a5de5a0efc5878e5ac3bb108429c427219981ed" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:39.924 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--lqqg4-eth0 goldmane-54d579b49d- calico-system 9fdc45e1-c13b-4a58-b814-a8f24e60f8eb 807 0 2025-09-05 00:03:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-lqqg4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie88e6d7a359 [] [] }} ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Namespace="calico-system" Pod="goldmane-54d579b49d-lqqg4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--lqqg4-" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:39.924 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Namespace="calico-system" Pod="goldmane-54d579b49d-lqqg4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:39.959 [INFO][4770] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" HandleID="k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Workload="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:39.959 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" HandleID="k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Workload="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000434100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-lqqg4", "timestamp":"2025-09-05 00:03:39.959363171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:39.959 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.010 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.011 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.058 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.064 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.069 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.071 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.073 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.073 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.075 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3 Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.083 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.090 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.090 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" host="localhost" Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.090 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:03:40.123801 containerd[1498]: 2025-09-05 00:03:40.090 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" HandleID="k8s-pod-network.d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Workload="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" Sep 5 00:03:40.124299 containerd[1498]: 2025-09-05 00:03:40.094 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Namespace="calico-system" Pod="goldmane-54d579b49d-lqqg4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--lqqg4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9fdc45e1-c13b-4a58-b814-a8f24e60f8eb", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-lqqg4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie88e6d7a359", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:40.124299 containerd[1498]: 2025-09-05 00:03:40.094 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Namespace="calico-system" Pod="goldmane-54d579b49d-lqqg4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" Sep 5 00:03:40.124299 containerd[1498]: 2025-09-05 00:03:40.094 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie88e6d7a359 ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Namespace="calico-system" Pod="goldmane-54d579b49d-lqqg4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" Sep 5 00:03:40.124299 containerd[1498]: 2025-09-05 00:03:40.096 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Namespace="calico-system" Pod="goldmane-54d579b49d-lqqg4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" Sep 5 00:03:40.124299 containerd[1498]: 2025-09-05 00:03:40.097 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Namespace="calico-system" Pod="goldmane-54d579b49d-lqqg4" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--lqqg4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9fdc45e1-c13b-4a58-b814-a8f24e60f8eb", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 3, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3", Pod:"goldmane-54d579b49d-lqqg4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie88e6d7a359", MAC:"4e:29:9f:f8:d9:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:03:40.124299 containerd[1498]: 2025-09-05 00:03:40.113 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" Namespace="calico-system" Pod="goldmane-54d579b49d-lqqg4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--lqqg4-eth0" Sep 5 00:03:40.147591 systemd[1]: Started 
cri-containerd-e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9.scope - libcontainer container e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9. Sep 5 00:03:40.151653 containerd[1498]: time="2025-09-05T00:03:40.151610249Z" level=info msg="connecting to shim d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3" address="unix:///run/containerd/s/afa462217871590f5b1fad771d52adc4cd6396793536f2418df977c1521093ba" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:03:40.169456 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:03:40.194774 systemd[1]: Started cri-containerd-d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3.scope - libcontainer container d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3. Sep 5 00:03:40.236360 containerd[1498]: time="2025-09-05T00:03:40.236301867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fxg5r,Uid:1b310bae-11fe-4356-979f-929d2920e4fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9\"" Sep 5 00:03:40.247445 containerd[1498]: time="2025-09-05T00:03:40.247382972Z" level=info msg="CreateContainer within sandbox \"e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:03:40.263013 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:03:40.263527 containerd[1498]: time="2025-09-05T00:03:40.263476132Z" level=info msg="Container b0638edc171cef5f26c39f77c60528f02e40cfcee6797c74320571cb27d989cb: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:40.271677 containerd[1498]: time="2025-09-05T00:03:40.271621371Z" level=info msg="CreateContainer within sandbox \"e71883e0a378c9efb2668b5d126d8ec79cde3fed3db7130da84dcbd6e59757c9\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0638edc171cef5f26c39f77c60528f02e40cfcee6797c74320571cb27d989cb\"" Sep 5 00:03:40.272458 containerd[1498]: time="2025-09-05T00:03:40.272232848Z" level=info msg="StartContainer for \"b0638edc171cef5f26c39f77c60528f02e40cfcee6797c74320571cb27d989cb\"" Sep 5 00:03:40.273502 containerd[1498]: time="2025-09-05T00:03:40.273051684Z" level=info msg="connecting to shim b0638edc171cef5f26c39f77c60528f02e40cfcee6797c74320571cb27d989cb" address="unix:///run/containerd/s/2963cae4626c8918ef81034c3a5de5a0efc5878e5ac3bb108429c427219981ed" protocol=ttrpc version=3 Sep 5 00:03:40.275876 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:43924.service - OpenSSH per-connection server daemon (10.0.0.1:43924). Sep 5 00:03:40.298690 systemd[1]: Started cri-containerd-b0638edc171cef5f26c39f77c60528f02e40cfcee6797c74320571cb27d989cb.scope - libcontainer container b0638edc171cef5f26c39f77c60528f02e40cfcee6797c74320571cb27d989cb. Sep 5 00:03:40.306818 containerd[1498]: time="2025-09-05T00:03:40.306641117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-lqqg4,Uid:9fdc45e1-c13b-4a58-b814-a8f24e60f8eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3\"" Sep 5 00:03:40.343203 containerd[1498]: time="2025-09-05T00:03:40.343082016Z" level=info msg="StartContainer for \"b0638edc171cef5f26c39f77c60528f02e40cfcee6797c74320571cb27d989cb\" returns successfully" Sep 5 00:03:40.345479 sshd[4899]: Accepted publickey for core from 10.0.0.1 port 43924 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 5 00:03:40.347771 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:03:40.353737 systemd-logind[1480]: New session 8 of user core. Sep 5 00:03:40.365659 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 5 00:03:40.396560 systemd-networkd[1438]: caliac93eb8fd33: Gained IPv6LL Sep 5 00:03:40.460894 systemd-networkd[1438]: cali2febb07d831: Gained IPv6LL Sep 5 00:03:40.649713 sshd[4939]: Connection closed by 10.0.0.1 port 43924 Sep 5 00:03:40.649975 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Sep 5 00:03:40.656171 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:03:40.656927 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:43924.service: Deactivated successfully. Sep 5 00:03:40.659784 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:03:40.662456 systemd-logind[1480]: Removed session 8. Sep 5 00:03:40.797157 containerd[1498]: time="2025-09-05T00:03:40.797102235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:40.797699 containerd[1498]: time="2025-09-05T00:03:40.797670473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 00:03:40.798512 containerd[1498]: time="2025-09-05T00:03:40.798480509Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:40.800648 containerd[1498]: time="2025-09-05T00:03:40.800614578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:40.801180 containerd[1498]: time="2025-09-05T00:03:40.801142815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.487814524s" Sep 5 00:03:40.801217 containerd[1498]: time="2025-09-05T00:03:40.801177575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 00:03:40.802860 containerd[1498]: time="2025-09-05T00:03:40.802597568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:03:40.806091 containerd[1498]: time="2025-09-05T00:03:40.806053191Z" level=info msg="CreateContainer within sandbox \"43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:03:40.815478 containerd[1498]: time="2025-09-05T00:03:40.815427904Z" level=info msg="Container 016f699c90fd6122a8801bc34d8d9a5d99fde750494808b1bf90198674a3450c: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:40.824562 containerd[1498]: time="2025-09-05T00:03:40.824515499Z" level=info msg="CreateContainer within sandbox \"43b20707c9b8d824b1bb37a68c819158c8cfa421ce59d65ebf786b4295ab69d5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"016f699c90fd6122a8801bc34d8d9a5d99fde750494808b1bf90198674a3450c\"" Sep 5 00:03:40.825097 containerd[1498]: time="2025-09-05T00:03:40.825070936Z" level=info msg="StartContainer for \"016f699c90fd6122a8801bc34d8d9a5d99fde750494808b1bf90198674a3450c\"" Sep 5 00:03:40.826088 containerd[1498]: time="2025-09-05T00:03:40.826047971Z" level=info msg="connecting to shim 016f699c90fd6122a8801bc34d8d9a5d99fde750494808b1bf90198674a3450c" address="unix:///run/containerd/s/c86a300032832829dae648b896d936dd286e21828fb10e4af1bde337c8a69516" protocol=ttrpc version=3 Sep 5 00:03:40.851617 systemd[1]: Started cri-containerd-016f699c90fd6122a8801bc34d8d9a5d99fde750494808b1bf90198674a3450c.scope - libcontainer 
container 016f699c90fd6122a8801bc34d8d9a5d99fde750494808b1bf90198674a3450c. Sep 5 00:03:40.892411 containerd[1498]: time="2025-09-05T00:03:40.892359401Z" level=info msg="StartContainer for \"016f699c90fd6122a8801bc34d8d9a5d99fde750494808b1bf90198674a3450c\" returns successfully" Sep 5 00:03:41.013840 kubelet[2644]: I0905 00:03:41.013772 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5655588b8f-x8lr2" podStartSLOduration=23.524471801 podStartE2EDuration="27.013754558s" podCreationTimestamp="2025-09-05 00:03:14 +0000 UTC" firstStartedPulling="2025-09-05 00:03:37.312910493 +0000 UTC m=+40.571858078" lastFinishedPulling="2025-09-05 00:03:40.80219329 +0000 UTC m=+44.061140835" observedRunningTime="2025-09-05 00:03:41.011750168 +0000 UTC m=+44.270697753" watchObservedRunningTime="2025-09-05 00:03:41.013754558 +0000 UTC m=+44.272702223" Sep 5 00:03:41.028473 kubelet[2644]: I0905 00:03:41.027160 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fxg5r" podStartSLOduration=37.027144133 podStartE2EDuration="37.027144133s" podCreationTimestamp="2025-09-05 00:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:03:41.027066734 +0000 UTC m=+44.286014319" watchObservedRunningTime="2025-09-05 00:03:41.027144133 +0000 UTC m=+44.286091718" Sep 5 00:03:41.295962 systemd-networkd[1438]: caliebbe7efa48a: Gained IPv6LL Sep 5 00:03:41.613557 systemd-networkd[1438]: calie88e6d7a359: Gained IPv6LL Sep 5 00:03:42.830999 containerd[1498]: time="2025-09-05T00:03:42.830890245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:42.831773 containerd[1498]: time="2025-09-05T00:03:42.831561282Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 00:03:42.832614 containerd[1498]: time="2025-09-05T00:03:42.832341158Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:42.838028 containerd[1498]: time="2025-09-05T00:03:42.837955172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:42.838847 containerd[1498]: time="2025-09-05T00:03:42.838773888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.03614428s" Sep 5 00:03:42.838847 containerd[1498]: time="2025-09-05T00:03:42.838802848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 00:03:42.840277 containerd[1498]: time="2025-09-05T00:03:42.840022162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:03:42.848447 containerd[1498]: time="2025-09-05T00:03:42.848393362Z" level=info msg="CreateContainer within sandbox \"2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:03:42.857512 containerd[1498]: time="2025-09-05T00:03:42.857025081Z" level=info msg="Container 39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d: CDI devices from CRI Config.CDIDevices: []" 
Sep 5 00:03:42.867767 containerd[1498]: time="2025-09-05T00:03:42.867722950Z" level=info msg="CreateContainer within sandbox \"2f4ba08c2c5592080c5f5df4719140dafac0fc4a461a2beeca1284d8ff3b9adf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d\"" Sep 5 00:03:42.868216 containerd[1498]: time="2025-09-05T00:03:42.868192348Z" level=info msg="StartContainer for \"39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d\"" Sep 5 00:03:42.869256 containerd[1498]: time="2025-09-05T00:03:42.869224823Z" level=info msg="connecting to shim 39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d" address="unix:///run/containerd/s/e7740016c696b8ad00dc94488553f17713d3200db64f72f99e33a0ef90441d13" protocol=ttrpc version=3 Sep 5 00:03:42.893858 systemd[1]: Started cri-containerd-39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d.scope - libcontainer container 39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d. 
Sep 5 00:03:42.956434 containerd[1498]: time="2025-09-05T00:03:42.956382728Z" level=info msg="StartContainer for \"39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d\" returns successfully" Sep 5 00:03:43.023202 kubelet[2644]: I0905 00:03:43.023130 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6488ffd8d-z7pjc" podStartSLOduration=19.49653402 podStartE2EDuration="25.023112212s" podCreationTimestamp="2025-09-05 00:03:18 +0000 UTC" firstStartedPulling="2025-09-05 00:03:37.313250531 +0000 UTC m=+40.572198116" lastFinishedPulling="2025-09-05 00:03:42.839828723 +0000 UTC m=+46.098776308" observedRunningTime="2025-09-05 00:03:43.022330656 +0000 UTC m=+46.281278241" watchObservedRunningTime="2025-09-05 00:03:43.023112212 +0000 UTC m=+46.282059757" Sep 5 00:03:43.099202 containerd[1498]: time="2025-09-05T00:03:43.099039659Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:43.100467 containerd[1498]: time="2025-09-05T00:03:43.100319573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:03:43.102467 containerd[1498]: time="2025-09-05T00:03:43.102316923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 262.259801ms" Sep 5 00:03:43.102467 containerd[1498]: time="2025-09-05T00:03:43.102354363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 00:03:43.116925 containerd[1498]: 
time="2025-09-05T00:03:43.116890615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:03:43.121387 containerd[1498]: time="2025-09-05T00:03:43.120669238Z" level=info msg="CreateContainer within sandbox \"02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:03:43.129947 containerd[1498]: time="2025-09-05T00:03:43.129909915Z" level=info msg="Container b4e1400290fef4a347d8d1f45e60de4a4f6d5396ab26e3acf434876bf2ae8ba8: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:43.137153 containerd[1498]: time="2025-09-05T00:03:43.137103761Z" level=info msg="CreateContainer within sandbox \"02203b649f3cac442db29b0b298a63f2c0b8ea1d6d60c50948333d41d8a69c59\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b4e1400290fef4a347d8d1f45e60de4a4f6d5396ab26e3acf434876bf2ae8ba8\"" Sep 5 00:03:43.138194 containerd[1498]: time="2025-09-05T00:03:43.138018397Z" level=info msg="StartContainer for \"b4e1400290fef4a347d8d1f45e60de4a4f6d5396ab26e3acf434876bf2ae8ba8\"" Sep 5 00:03:43.140101 containerd[1498]: time="2025-09-05T00:03:43.140032867Z" level=info msg="connecting to shim b4e1400290fef4a347d8d1f45e60de4a4f6d5396ab26e3acf434876bf2ae8ba8" address="unix:///run/containerd/s/ce1013812e9b8edb74ad9675f052616c810b5f13594dbdc456825016e6ba37d2" protocol=ttrpc version=3 Sep 5 00:03:43.170648 systemd[1]: Started cri-containerd-b4e1400290fef4a347d8d1f45e60de4a4f6d5396ab26e3acf434876bf2ae8ba8.scope - libcontainer container b4e1400290fef4a347d8d1f45e60de4a4f6d5396ab26e3acf434876bf2ae8ba8. 
Sep 5 00:03:43.216863 containerd[1498]: time="2025-09-05T00:03:43.216676710Z" level=info msg="StartContainer for \"b4e1400290fef4a347d8d1f45e60de4a4f6d5396ab26e3acf434876bf2ae8ba8\" returns successfully" Sep 5 00:03:44.011414 kubelet[2644]: I0905 00:03:44.011162 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:03:44.465553 containerd[1498]: time="2025-09-05T00:03:44.465063615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:44.465909 containerd[1498]: time="2025-09-05T00:03:44.465808492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 00:03:44.466826 containerd[1498]: time="2025-09-05T00:03:44.466795727Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:44.470096 containerd[1498]: time="2025-09-05T00:03:44.470000752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:03:44.470685 containerd[1498]: time="2025-09-05T00:03:44.470660189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.353730574s" Sep 5 00:03:44.470740 containerd[1498]: time="2025-09-05T00:03:44.470686629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 00:03:44.472391 containerd[1498]: 
time="2025-09-05T00:03:44.472316462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:03:44.476423 containerd[1498]: time="2025-09-05T00:03:44.476379083Z" level=info msg="CreateContainer within sandbox \"e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:03:44.500764 containerd[1498]: time="2025-09-05T00:03:44.500726332Z" level=info msg="Container ddef01c3a0c6b767c1aa660b10c311fece2738bcca020b9ae6df86d4f8c5d025: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:03:44.502377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627511173.mount: Deactivated successfully. Sep 5 00:03:44.513161 containerd[1498]: time="2025-09-05T00:03:44.513005516Z" level=info msg="CreateContainer within sandbox \"e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ddef01c3a0c6b767c1aa660b10c311fece2738bcca020b9ae6df86d4f8c5d025\"" Sep 5 00:03:44.515992 containerd[1498]: time="2025-09-05T00:03:44.515078707Z" level=info msg="StartContainer for \"ddef01c3a0c6b767c1aa660b10c311fece2738bcca020b9ae6df86d4f8c5d025\"" Sep 5 00:03:44.518806 containerd[1498]: time="2025-09-05T00:03:44.518770530Z" level=info msg="connecting to shim ddef01c3a0c6b767c1aa660b10c311fece2738bcca020b9ae6df86d4f8c5d025" address="unix:///run/containerd/s/e77ac5edf6e98ab86f548199a02147912ca7fefe55179d26552344ec5bf1ae32" protocol=ttrpc version=3 Sep 5 00:03:44.540620 systemd[1]: Started cri-containerd-ddef01c3a0c6b767c1aa660b10c311fece2738bcca020b9ae6df86d4f8c5d025.scope - libcontainer container ddef01c3a0c6b767c1aa660b10c311fece2738bcca020b9ae6df86d4f8c5d025. 
Sep 5 00:03:44.575108 containerd[1498]: time="2025-09-05T00:03:44.575066793Z" level=info msg="StartContainer for \"ddef01c3a0c6b767c1aa660b10c311fece2738bcca020b9ae6df86d4f8c5d025\" returns successfully" Sep 5 00:03:45.582723 kubelet[2644]: I0905 00:03:45.582419 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5655588b8f-wpbgb" podStartSLOduration=27.53579345 podStartE2EDuration="31.582399482s" podCreationTimestamp="2025-09-05 00:03:14 +0000 UTC" firstStartedPulling="2025-09-05 00:03:39.070081544 +0000 UTC m=+42.329029089" lastFinishedPulling="2025-09-05 00:03:43.116687536 +0000 UTC m=+46.375635121" observedRunningTime="2025-09-05 00:03:44.031152157 +0000 UTC m=+47.290099742" watchObservedRunningTime="2025-09-05 00:03:45.582399482 +0000 UTC m=+48.841347107" Sep 5 00:03:45.665062 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:44056.service - OpenSSH per-connection server daemon (10.0.0.1:44056). Sep 5 00:03:45.774432 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 44056 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg Sep 5 00:03:45.776750 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:03:45.784897 systemd-logind[1480]: New session 9 of user core. Sep 5 00:03:45.788627 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:03:46.047635 sshd[5135]: Connection closed by 10.0.0.1 port 44056 Sep 5 00:03:46.048691 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Sep 5 00:03:46.054822 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:03:46.056758 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:44056.service: Deactivated successfully. Sep 5 00:03:46.059790 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:03:46.062280 systemd-logind[1480]: Removed session 9. 
Sep 5 00:03:46.272414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822198784.mount: Deactivated successfully.
Sep 5 00:03:47.134808 containerd[1498]: time="2025-09-05T00:03:47.134757068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:47.135784 containerd[1498]: time="2025-09-05T00:03:47.135553905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 5 00:03:47.136512 containerd[1498]: time="2025-09-05T00:03:47.136466981Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:47.140416 containerd[1498]: time="2025-09-05T00:03:47.140369324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:47.141425 containerd[1498]: time="2025-09-05T00:03:47.141382280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.669034498s"
Sep 5 00:03:47.141425 containerd[1498]: time="2025-09-05T00:03:47.141420400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 5 00:03:47.142860 containerd[1498]: time="2025-09-05T00:03:47.142832194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 5 00:03:47.147213 containerd[1498]: time="2025-09-05T00:03:47.147169775Z" level=info msg="CreateContainer within sandbox \"d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 5 00:03:47.155856 containerd[1498]: time="2025-09-05T00:03:47.155712418Z" level=info msg="Container 37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:03:47.163177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191047419.mount: Deactivated successfully.
Sep 5 00:03:47.209979 containerd[1498]: time="2025-09-05T00:03:47.209796584Z" level=info msg="CreateContainer within sandbox \"d7d7b26c6d7ea5c705947b1a64f3cd20f86f13e3d12d189f71b38c93e53282c3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3\""
Sep 5 00:03:47.210559 containerd[1498]: time="2025-09-05T00:03:47.210531621Z" level=info msg="StartContainer for \"37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3\""
Sep 5 00:03:47.212301 containerd[1498]: time="2025-09-05T00:03:47.212223454Z" level=info msg="connecting to shim 37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3" address="unix:///run/containerd/s/afa462217871590f5b1fad771d52adc4cd6396793536f2418df977c1521093ba" protocol=ttrpc version=3
Sep 5 00:03:47.242670 systemd[1]: Started cri-containerd-37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3.scope - libcontainer container 37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3.
Sep 5 00:03:47.289426 containerd[1498]: time="2025-09-05T00:03:47.289390560Z" level=info msg="StartContainer for \"37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3\" returns successfully"
Sep 5 00:03:48.039462 kubelet[2644]: I0905 00:03:48.039049 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-lqqg4" podStartSLOduration=23.206856666 podStartE2EDuration="30.039030802s" podCreationTimestamp="2025-09-05 00:03:18 +0000 UTC" firstStartedPulling="2025-09-05 00:03:40.309935541 +0000 UTC m=+43.568883126" lastFinishedPulling="2025-09-05 00:03:47.142109677 +0000 UTC m=+50.401057262" observedRunningTime="2025-09-05 00:03:48.038418405 +0000 UTC m=+51.297365990" watchObservedRunningTime="2025-09-05 00:03:48.039030802 +0000 UTC m=+51.297978387"
Sep 5 00:03:48.193122 containerd[1498]: time="2025-09-05T00:03:48.193054907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3\" id:\"3c67c1a20798471ffe041c8478d6743ee8b2d464daac9142ba39a92a53d60e94\" pid:5212 exit_status:1 exited_at:{seconds:1757030628 nanos:188821685}"
Sep 5 00:03:48.230363 kubelet[2644]: I0905 00:03:48.230315 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:03:48.324766 containerd[1498]: time="2025-09-05T00:03:48.324349909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea\" id:\"6985d1152ae8b917c322b1f5888806721571cb7d977a479e8b13406fc88ad650\" pid:5237 exited_at:{seconds:1757030628 nanos:324065030}"
Sep 5 00:03:48.430140 containerd[1498]: time="2025-09-05T00:03:48.430099380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20a3a12228d3420d167af1258a8efb0c55c0996af6533186e1122a7bd6d8a4ea\" id:\"54355e6458346dc26098f617fcae326ed6713a818ebffd994016b8d3133836e3\" pid:5266 exited_at:{seconds:1757030628 nanos:429734621}"
Sep 5 00:03:48.558763 containerd[1498]: time="2025-09-05T00:03:48.558710833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:48.560275 containerd[1498]: time="2025-09-05T00:03:48.560223546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 5 00:03:48.561421 containerd[1498]: time="2025-09-05T00:03:48.561383502Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:48.563448 containerd[1498]: time="2025-09-05T00:03:48.563364853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:03:48.564015 containerd[1498]: time="2025-09-05T00:03:48.563982690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.421115337s"
Sep 5 00:03:48.564087 containerd[1498]: time="2025-09-05T00:03:48.564018290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 5 00:03:48.570206 containerd[1498]: time="2025-09-05T00:03:48.570166024Z" level=info msg="CreateContainer within sandbox \"e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 5 00:03:48.576131 containerd[1498]: time="2025-09-05T00:03:48.575843960Z" level=info msg="Container 27ffde473230c6ac9c3b0f275e19e8d7dd7e61468304a4c12e60f033bb3b6354: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:03:48.585457 containerd[1498]: time="2025-09-05T00:03:48.585296080Z" level=info msg="CreateContainer within sandbox \"e99e0a12af00c1a8a55ac6d5bc6412d322be5b303fb5da2d61eaebbfcbc58606\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"27ffde473230c6ac9c3b0f275e19e8d7dd7e61468304a4c12e60f033bb3b6354\""
Sep 5 00:03:48.586176 containerd[1498]: time="2025-09-05T00:03:48.585853877Z" level=info msg="StartContainer for \"27ffde473230c6ac9c3b0f275e19e8d7dd7e61468304a4c12e60f033bb3b6354\""
Sep 5 00:03:48.587632 containerd[1498]: time="2025-09-05T00:03:48.587604470Z" level=info msg="connecting to shim 27ffde473230c6ac9c3b0f275e19e8d7dd7e61468304a4c12e60f033bb3b6354" address="unix:///run/containerd/s/e77ac5edf6e98ab86f548199a02147912ca7fefe55179d26552344ec5bf1ae32" protocol=ttrpc version=3
Sep 5 00:03:48.610642 systemd[1]: Started cri-containerd-27ffde473230c6ac9c3b0f275e19e8d7dd7e61468304a4c12e60f033bb3b6354.scope - libcontainer container 27ffde473230c6ac9c3b0f275e19e8d7dd7e61468304a4c12e60f033bb3b6354.
Sep 5 00:03:48.647302 containerd[1498]: time="2025-09-05T00:03:48.647264776Z" level=info msg="StartContainer for \"27ffde473230c6ac9c3b0f275e19e8d7dd7e61468304a4c12e60f033bb3b6354\" returns successfully"
Sep 5 00:03:48.911166 kubelet[2644]: I0905 00:03:48.911053 2644 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 5 00:03:48.918679 kubelet[2644]: I0905 00:03:48.918626 2644 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 5 00:03:49.109741 containerd[1498]: time="2025-09-05T00:03:49.109698258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37512831b39fc0154a396550664707a29980848016df5732e9755e96050d32e3\" id:\"a02d8e6810315d9ed673eb7649d91468c1b2007eeab50f01bcc721b90c8a5869\" pid:5334 exit_status:1 exited_at:{seconds:1757030629 nanos:109350619}"
Sep 5 00:03:51.063758 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:50840.service - OpenSSH per-connection server daemon (10.0.0.1:50840).
Sep 5 00:03:51.132307 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 50840 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:03:51.134017 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:51.138679 systemd-logind[1480]: New session 10 of user core.
Sep 5 00:03:51.150728 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 5 00:03:51.361583 sshd[5351]: Connection closed by 10.0.0.1 port 50840
Sep 5 00:03:51.361344 sshd-session[5349]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:51.371609 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:50840.service: Deactivated successfully.
Sep 5 00:03:51.374870 systemd[1]: session-10.scope: Deactivated successfully.
Sep 5 00:03:51.375795 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit.
Sep 5 00:03:51.378361 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:50842.service - OpenSSH per-connection server daemon (10.0.0.1:50842).
Sep 5 00:03:51.380238 systemd-logind[1480]: Removed session 10.
Sep 5 00:03:51.448166 sshd[5369]: Accepted publickey for core from 10.0.0.1 port 50842 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:03:51.449566 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:51.457558 systemd-logind[1480]: New session 11 of user core.
Sep 5 00:03:51.469106 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 5 00:03:51.686423 sshd[5374]: Connection closed by 10.0.0.1 port 50842
Sep 5 00:03:51.687464 sshd-session[5369]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:51.697972 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:50842.service: Deactivated successfully.
Sep 5 00:03:51.700664 systemd[1]: session-11.scope: Deactivated successfully.
Sep 5 00:03:51.702195 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit.
Sep 5 00:03:51.707011 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:50848.service - OpenSSH per-connection server daemon (10.0.0.1:50848).
Sep 5 00:03:51.708364 systemd-logind[1480]: Removed session 11.
Sep 5 00:03:51.763194 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 50848 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:03:51.765077 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:51.770046 systemd-logind[1480]: New session 12 of user core.
Sep 5 00:03:51.775617 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 5 00:03:51.932905 sshd[5388]: Connection closed by 10.0.0.1 port 50848
Sep 5 00:03:51.933244 sshd-session[5386]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:51.936636 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:50848.service: Deactivated successfully.
Sep 5 00:03:51.939267 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 00:03:51.940392 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit.
Sep 5 00:03:51.942018 systemd-logind[1480]: Removed session 12.
Sep 5 00:03:54.063467 kubelet[2644]: I0905 00:03:54.063248 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:03:54.104310 containerd[1498]: time="2025-09-05T00:03:54.104258941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d\" id:\"d1f937a17676c07f2bf41419749196ce47aa5ccea24c4256604732c654f69d63\" pid:5416 exited_at:{seconds:1757030634 nanos:103949543}"
Sep 5 00:03:54.121263 kubelet[2644]: I0905 00:03:54.121184 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4fs25" podStartSLOduration=26.719256299 podStartE2EDuration="36.121164475s" podCreationTimestamp="2025-09-05 00:03:18 +0000 UTC" firstStartedPulling="2025-09-05 00:03:39.162869351 +0000 UTC m=+42.421816936" lastFinishedPulling="2025-09-05 00:03:48.564777527 +0000 UTC m=+51.823725112" observedRunningTime="2025-09-05 00:03:49.050127147 +0000 UTC m=+52.309074732" watchObservedRunningTime="2025-09-05 00:03:54.121164475 +0000 UTC m=+57.380112060"
Sep 5 00:03:54.148411 containerd[1498]: time="2025-09-05T00:03:54.147417733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d\" id:\"a1933236ab0d98cc7f21177bdddb69953c75486749081a72cbd5583f3a1db743\" pid:5438 exited_at:{seconds:1757030634 nanos:147015654}"
Sep 5 00:03:56.947821 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:50998.service - OpenSSH per-connection server daemon (10.0.0.1:50998).
Sep 5 00:03:57.005530 sshd[5451]: Accepted publickey for core from 10.0.0.1 port 50998 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:03:57.006863 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:57.011298 systemd-logind[1480]: New session 13 of user core.
Sep 5 00:03:57.030612 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 00:03:57.182973 sshd[5453]: Connection closed by 10.0.0.1 port 50998
Sep 5 00:03:57.185304 sshd-session[5451]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:57.194985 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:50998.service: Deactivated successfully.
Sep 5 00:03:57.201330 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 00:03:57.204499 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit.
Sep 5 00:03:57.208321 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:51012.service - OpenSSH per-connection server daemon (10.0.0.1:51012).
Sep 5 00:03:57.209042 systemd-logind[1480]: Removed session 13.
Sep 5 00:03:57.271912 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 51012 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:03:57.273253 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:57.277129 systemd-logind[1480]: New session 14 of user core.
Sep 5 00:03:57.289589 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 00:03:57.482430 sshd[5469]: Connection closed by 10.0.0.1 port 51012
Sep 5 00:03:57.482881 sshd-session[5467]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:57.493697 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:51012.service: Deactivated successfully.
Sep 5 00:03:57.496863 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 00:03:57.498505 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit.
Sep 5 00:03:57.499917 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:51016.service - OpenSSH per-connection server daemon (10.0.0.1:51016).
Sep 5 00:03:57.501372 systemd-logind[1480]: Removed session 14.
Sep 5 00:03:57.565080 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 51016 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:03:57.566525 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:57.570604 systemd-logind[1480]: New session 15 of user core.
Sep 5 00:03:57.576762 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 00:03:58.262354 sshd[5482]: Connection closed by 10.0.0.1 port 51016
Sep 5 00:03:58.262860 sshd-session[5480]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:58.274034 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:51016.service: Deactivated successfully.
Sep 5 00:03:58.276458 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 00:03:58.279222 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit.
Sep 5 00:03:58.282199 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:51018.service - OpenSSH per-connection server daemon (10.0.0.1:51018).
Sep 5 00:03:58.285253 systemd-logind[1480]: Removed session 15.
Sep 5 00:03:58.344790 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 51018 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:03:58.346399 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:58.350636 systemd-logind[1480]: New session 16 of user core.
Sep 5 00:03:58.360614 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:03:58.650658 sshd[5509]: Connection closed by 10.0.0.1 port 51018
Sep 5 00:03:58.650382 sshd-session[5507]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:58.660759 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:51018.service: Deactivated successfully.
Sep 5 00:03:58.662578 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:03:58.667623 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:03:58.669664 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:51026.service - OpenSSH per-connection server daemon (10.0.0.1:51026).
Sep 5 00:03:58.671483 systemd-logind[1480]: Removed session 16.
Sep 5 00:03:58.728137 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 51026 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:03:58.729392 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:03:58.735525 systemd-logind[1480]: New session 17 of user core.
Sep 5 00:03:58.740633 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:03:58.879317 sshd[5529]: Connection closed by 10.0.0.1 port 51026
Sep 5 00:03:58.879637 sshd-session[5527]: pam_unix(sshd:session): session closed for user core
Sep 5 00:03:58.883031 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:51026.service: Deactivated successfully.
Sep 5 00:03:58.884756 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:03:58.887606 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:03:58.888826 systemd-logind[1480]: Removed session 17.
Sep 5 00:04:03.891311 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:37424.service - OpenSSH per-connection server daemon (10.0.0.1:37424).
Sep 5 00:04:03.956748 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 37424 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:04:03.958173 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:04:03.964941 systemd-logind[1480]: New session 18 of user core.
Sep 5 00:04:03.974743 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:04:04.127331 sshd[5548]: Connection closed by 10.0.0.1 port 37424
Sep 5 00:04:04.127676 sshd-session[5546]: pam_unix(sshd:session): session closed for user core
Sep 5 00:04:04.131546 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:37424.service: Deactivated successfully.
Sep 5 00:04:04.133791 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:04:04.134603 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:04:04.136710 systemd-logind[1480]: Removed session 18.
Sep 5 00:04:05.332819 containerd[1498]: time="2025-09-05T00:04:05.332767877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39f95e5c9c513218454055be1f15000389f52eb98c30a38df41b97302f291c4d\" id:\"20cc55e9c4874cd654acdc8574184a55427f2555d717de9e3bbf76508b54c486\" pid:5571 exited_at:{seconds:1757030645 nanos:332457233}"
Sep 5 00:04:09.146196 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:37440.service - OpenSSH per-connection server daemon (10.0.0.1:37440).
Sep 5 00:04:09.223954 sshd[5586]: Accepted publickey for core from 10.0.0.1 port 37440 ssh2: RSA SHA256:dz8a5vpzhl9T1tN+PlbA3wzUJkL1bHm+PkgBuWVD7dg
Sep 5 00:04:09.227169 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:04:09.238082 systemd-logind[1480]: New session 19 of user core.
Sep 5 00:04:09.255624 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:04:09.465103 sshd[5588]: Connection closed by 10.0.0.1 port 37440
Sep 5 00:04:09.465648 sshd-session[5586]: pam_unix(sshd:session): session closed for user core
Sep 5 00:04:09.470315 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:37440.service: Deactivated successfully.
Sep 5 00:04:09.474065 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:04:09.475578 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:04:09.477658 systemd-logind[1480]: Removed session 19.