Sep 13 09:43:31.751078 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 09:43:31.751097 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sat Sep 13 08:30:48 -00 2025
Sep 13 09:43:31.751107 kernel: KASLR enabled
Sep 13 09:43:31.751112 kernel: efi: EFI v2.7 by EDK II
Sep 13 09:43:31.751117 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 13 09:43:31.751123 kernel: random: crng init done
Sep 13 09:43:31.751130 kernel: secureboot: Secure boot disabled
Sep 13 09:43:31.751135 kernel: ACPI: Early table checksum verification disabled
Sep 13 09:43:31.751141 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 13 09:43:31.751148 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 09:43:31.751154 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751159 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751165 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751171 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751178 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751185 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751192 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751198 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751204 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 09:43:31.751210 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 13 09:43:31.751215 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 13 09:43:31.751222 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 09:43:31.751228 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 13 09:43:31.751233 kernel: Zone ranges:
Sep 13 09:43:31.751239 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 09:43:31.751246 kernel: DMA32 empty
Sep 13 09:43:31.751252 kernel: Normal empty
Sep 13 09:43:31.751258 kernel: Device empty
Sep 13 09:43:31.751264 kernel: Movable zone start for each node
Sep 13 09:43:31.751270 kernel: Early memory node ranges
Sep 13 09:43:31.751276 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 13 09:43:31.751282 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 13 09:43:31.751288 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 13 09:43:31.751300 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 13 09:43:31.751306 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 13 09:43:31.751312 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 13 09:43:31.751318 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 13 09:43:31.751325 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 13 09:43:31.751332 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 13 09:43:31.751338 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 13 09:43:31.751347 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 13 09:43:31.751353 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 13 09:43:31.751360 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 13 09:43:31.751368 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 09:43:31.751374 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 13 09:43:31.751381 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 13 09:43:31.751387 kernel: psci: probing for conduit method from ACPI.
Sep 13 09:43:31.751393 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 09:43:31.751400 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 09:43:31.751406 kernel: psci: Trusted OS migration not required
Sep 13 09:43:31.751413 kernel: psci: SMC Calling Convention v1.1
Sep 13 09:43:31.751420 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 09:43:31.751426 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 13 09:43:31.751434 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 13 09:43:31.751441 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 13 09:43:31.751448 kernel: Detected PIPT I-cache on CPU0
Sep 13 09:43:31.751454 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 09:43:31.751461 kernel: CPU features: detected: Spectre-v4
Sep 13 09:43:31.751467 kernel: CPU features: detected: Spectre-BHB
Sep 13 09:43:31.751474 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 09:43:31.751480 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 09:43:31.751486 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 09:43:31.751493 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 09:43:31.751499 kernel: alternatives: applying boot alternatives
Sep 13 09:43:31.751507 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=83877cdd7a467ed85aef41f446aeb722db891c2110b10250039f63b8f9619b03
Sep 13 09:43:31.751514 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 09:43:31.751521 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 09:43:31.751528 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 09:43:31.751534 kernel: Fallback order for Node 0: 0
Sep 13 09:43:31.751540 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 13 09:43:31.751547 kernel: Policy zone: DMA
Sep 13 09:43:31.751553 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 09:43:31.751559 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 13 09:43:31.751566 kernel: software IO TLB: area num 4.
Sep 13 09:43:31.751572 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 13 09:43:31.751579 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 13 09:43:31.751586 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 09:43:31.751593 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 09:43:31.751599 kernel: rcu: RCU event tracing is enabled.
Sep 13 09:43:31.751606 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 09:43:31.751612 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 09:43:31.751619 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 09:43:31.751625 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 09:43:31.751632 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 09:43:31.751638 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 09:43:31.751645 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 09:43:31.751651 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 09:43:31.751659 kernel: GICv3: 256 SPIs implemented
Sep 13 09:43:31.751665 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 09:43:31.751672 kernel: Root IRQ handler: gic_handle_irq
Sep 13 09:43:31.751678 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 13 09:43:31.751684 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 13 09:43:31.751690 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 09:43:31.751697 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 09:43:31.751703 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 09:43:31.751710 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 13 09:43:31.751717 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 13 09:43:31.751723 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 13 09:43:31.751729 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 09:43:31.751745 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 09:43:31.751752 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 09:43:31.751759 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 09:43:31.751765 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 09:43:31.751772 kernel: arm-pv: using stolen time PV
Sep 13 09:43:31.751778 kernel: Console: colour dummy device 80x25
Sep 13 09:43:31.751785 kernel: ACPI: Core revision 20240827
Sep 13 09:43:31.751792 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 09:43:31.751798 kernel: pid_max: default: 32768 minimum: 301
Sep 13 09:43:31.751805 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 09:43:31.751814 kernel: landlock: Up and running.
Sep 13 09:43:31.751833 kernel: SELinux: Initializing.
Sep 13 09:43:31.751840 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 09:43:31.751846 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 09:43:31.751853 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 09:43:31.751859 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 09:43:31.751866 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 09:43:31.751890 kernel: Remapping and enabling EFI services.
Sep 13 09:43:31.751897 kernel: smp: Bringing up secondary CPUs ...
Sep 13 09:43:31.751910 kernel: Detected PIPT I-cache on CPU1
Sep 13 09:43:31.751917 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 09:43:31.751924 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 13 09:43:31.751931 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 09:43:31.751938 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 09:43:31.751945 kernel: Detected PIPT I-cache on CPU2
Sep 13 09:43:31.751952 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 13 09:43:31.751959 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 13 09:43:31.751967 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 09:43:31.751974 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 13 09:43:31.751981 kernel: Detected PIPT I-cache on CPU3
Sep 13 09:43:31.751987 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 13 09:43:31.751994 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 13 09:43:31.752001 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 09:43:31.752008 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 13 09:43:31.752014 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 09:43:31.752021 kernel: SMP: Total of 4 processors activated.
Sep 13 09:43:31.752029 kernel: CPU: All CPU(s) started at EL1
Sep 13 09:43:31.752036 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 09:43:31.752043 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 09:43:31.752050 kernel: CPU features: detected: Common not Private translations
Sep 13 09:43:31.752057 kernel: CPU features: detected: CRC32 instructions
Sep 13 09:43:31.752064 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 13 09:43:31.752071 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 09:43:31.752078 kernel: CPU features: detected: LSE atomic instructions
Sep 13 09:43:31.752084 kernel: CPU features: detected: Privileged Access Never
Sep 13 09:43:31.752093 kernel: CPU features: detected: RAS Extension Support
Sep 13 09:43:31.752099 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 09:43:31.752106 kernel: alternatives: applying system-wide alternatives
Sep 13 09:43:31.752113 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 13 09:43:31.752120 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 13 09:43:31.752127 kernel: devtmpfs: initialized
Sep 13 09:43:31.752134 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 09:43:31.752141 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 09:43:31.752148 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 13 09:43:31.752156 kernel: 0 pages in range for non-PLT usage
Sep 13 09:43:31.752163 kernel: 508560 pages in range for PLT usage
Sep 13 09:43:31.752170 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 09:43:31.752177 kernel: SMBIOS 3.0.0 present.
Sep 13 09:43:31.752183 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 13 09:43:31.752190 kernel: DMI: Memory slots populated: 1/1
Sep 13 09:43:31.752197 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 09:43:31.752204 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 09:43:31.752211 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 09:43:31.752219 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 09:43:31.752226 kernel: audit: initializing netlink subsys (disabled)
Sep 13 09:43:31.752233 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 13 09:43:31.752240 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 09:43:31.752247 kernel: cpuidle: using governor menu
Sep 13 09:43:31.752254 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 09:43:31.752260 kernel: ASID allocator initialised with 32768 entries
Sep 13 09:43:31.752267 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 09:43:31.752274 kernel: Serial: AMBA PL011 UART driver
Sep 13 09:43:31.752282 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 09:43:31.752289 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 09:43:31.752295 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 09:43:31.752302 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 13 09:43:31.752309 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 09:43:31.752316 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 09:43:31.752323 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 09:43:31.752330 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 13 09:43:31.752337 kernel: ACPI: Added _OSI(Module Device)
Sep 13 09:43:31.752344 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 09:43:31.752351 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 09:43:31.752358 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 09:43:31.752365 kernel: ACPI: Interpreter enabled
Sep 13 09:43:31.752372 kernel: ACPI: Using GIC for interrupt routing
Sep 13 09:43:31.752379 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 09:43:31.752385 kernel: ACPI: CPU0 has been hot-added
Sep 13 09:43:31.752392 kernel: ACPI: CPU1 has been hot-added
Sep 13 09:43:31.752399 kernel: ACPI: CPU2 has been hot-added
Sep 13 09:43:31.752405 kernel: ACPI: CPU3 has been hot-added
Sep 13 09:43:31.752413 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 09:43:31.752420 kernel: printk: legacy console [ttyAMA0] enabled
Sep 13 09:43:31.752427 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 09:43:31.752549 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 09:43:31.752612 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 09:43:31.752669 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 09:43:31.752725 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 09:43:31.752796 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 09:43:31.752805 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 09:43:31.752812 kernel: PCI host bridge to bus 0000:00
Sep 13 09:43:31.752910 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 09:43:31.752972 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 09:43:31.753024 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 09:43:31.753077 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 09:43:31.753162 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 13 09:43:31.753232 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 09:43:31.753292 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 13 09:43:31.753351 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 13 09:43:31.753410 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 09:43:31.753468 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 13 09:43:31.753526 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 13 09:43:31.753586 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 13 09:43:31.753640 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 09:43:31.753691 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 09:43:31.753753 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 09:43:31.753763 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 09:43:31.753770 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 09:43:31.753777 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 09:43:31.753786 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 09:43:31.753793 kernel: iommu: Default domain type: Translated
Sep 13 09:43:31.753799 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 09:43:31.753806 kernel: efivars: Registered efivars operations
Sep 13 09:43:31.753813 kernel: vgaarb: loaded
Sep 13 09:43:31.753820 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 09:43:31.753827 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 09:43:31.753834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 09:43:31.753841 kernel: pnp: PnP ACPI init
Sep 13 09:43:31.753919 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 09:43:31.753930 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 09:43:31.753937 kernel: NET: Registered PF_INET protocol family
Sep 13 09:43:31.753944 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 09:43:31.753951 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 09:43:31.753958 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 09:43:31.753965 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 09:43:31.753972 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 09:43:31.753981 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 09:43:31.753988 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 09:43:31.753995 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 09:43:31.754002 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 09:43:31.754008 kernel: PCI: CLS 0 bytes, default 64
Sep 13 09:43:31.754015 kernel: kvm [1]: HYP mode not available
Sep 13 09:43:31.754022 kernel: Initialise system trusted keyrings
Sep 13 09:43:31.754029 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 09:43:31.754035 kernel: Key type asymmetric registered
Sep 13 09:43:31.754043 kernel: Asymmetric key parser 'x509' registered
Sep 13 09:43:31.754051 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 09:43:31.754058 kernel: io scheduler mq-deadline registered
Sep 13 09:43:31.754065 kernel: io scheduler kyber registered
Sep 13 09:43:31.754071 kernel: io scheduler bfq registered
Sep 13 09:43:31.754078 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 09:43:31.754085 kernel: ACPI: button: Power Button [PWRB]
Sep 13 09:43:31.754093 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 09:43:31.754152 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 13 09:43:31.754164 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 09:43:31.754170 kernel: thunder_xcv, ver 1.0
Sep 13 09:43:31.754177 kernel: thunder_bgx, ver 1.0
Sep 13 09:43:31.754184 kernel: nicpf, ver 1.0
Sep 13 09:43:31.754191 kernel: nicvf, ver 1.0
Sep 13 09:43:31.754257 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 09:43:31.754314 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T09:43:31 UTC (1757756611)
Sep 13 09:43:31.754323 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 09:43:31.754332 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 13 09:43:31.754339 kernel: watchdog: NMI not fully supported
Sep 13 09:43:31.754346 kernel: watchdog: Hard watchdog permanently disabled
Sep 13 09:43:31.754353 kernel: NET: Registered PF_INET6 protocol family
Sep 13 09:43:31.754360 kernel: Segment Routing with IPv6
Sep 13 09:43:31.754367 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 09:43:31.754374 kernel: NET: Registered PF_PACKET protocol family
Sep 13 09:43:31.754380 kernel: Key type dns_resolver registered
Sep 13 09:43:31.754387 kernel: registered taskstats version 1
Sep 13 09:43:31.754394 kernel: Loading compiled-in X.509 certificates
Sep 13 09:43:31.754402 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: eb9aef096a79634fcad4856ba5f85bf33a87e4c7'
Sep 13 09:43:31.754409 kernel: Demotion targets for Node 0: null
Sep 13 09:43:31.754416 kernel: Key type .fscrypt registered
Sep 13 09:43:31.754423 kernel: Key type fscrypt-provisioning registered
Sep 13 09:43:31.754430 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 09:43:31.754436 kernel: ima: Allocated hash algorithm: sha1
Sep 13 09:43:31.754443 kernel: ima: No architecture policies found
Sep 13 09:43:31.754450 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 09:43:31.754458 kernel: clk: Disabling unused clocks
Sep 13 09:43:31.754465 kernel: PM: genpd: Disabling unused power domains
Sep 13 09:43:31.754472 kernel: Warning: unable to open an initial console.
Sep 13 09:43:31.754479 kernel: Freeing unused kernel memory: 38976K
Sep 13 09:43:31.754486 kernel: Run /init as init process
Sep 13 09:43:31.754493 kernel: with arguments:
Sep 13 09:43:31.754500 kernel: /init
Sep 13 09:43:31.754506 kernel: with environment:
Sep 13 09:43:31.754513 kernel: HOME=/
Sep 13 09:43:31.754521 kernel: TERM=linux
Sep 13 09:43:31.754528 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 09:43:31.754536 systemd[1]: Successfully made /usr/ read-only.
Sep 13 09:43:31.754546 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 09:43:31.754554 systemd[1]: Detected virtualization kvm.
Sep 13 09:43:31.754561 systemd[1]: Detected architecture arm64.
Sep 13 09:43:31.754568 systemd[1]: Running in initrd.
Sep 13 09:43:31.754575 systemd[1]: No hostname configured, using default hostname.
Sep 13 09:43:31.754584 systemd[1]: Hostname set to .
Sep 13 09:43:31.754591 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 09:43:31.754598 systemd[1]: Queued start job for default target initrd.target.
Sep 13 09:43:31.754606 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 09:43:31.754613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 09:43:31.754621 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 09:43:31.754629 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 09:43:31.754636 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 09:43:31.754646 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 09:43:31.754654 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 09:43:31.754661 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 09:43:31.754669 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 09:43:31.754676 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 09:43:31.754684 systemd[1]: Reached target paths.target - Path Units.
Sep 13 09:43:31.754692 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 09:43:31.754700 systemd[1]: Reached target swap.target - Swaps.
Sep 13 09:43:31.754707 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 09:43:31.754714 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 09:43:31.754722 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 09:43:31.754729 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 09:43:31.754743 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 09:43:31.754752 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 09:43:31.754760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 09:43:31.754769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 09:43:31.754776 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 09:43:31.754784 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 09:43:31.754791 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 09:43:31.754799 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 09:43:31.754807 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 13 09:43:31.754814 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 09:43:31.754821 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 09:43:31.754829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 09:43:31.754838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 09:43:31.754845 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 09:43:31.754853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 09:43:31.754860 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 09:43:31.754869 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 09:43:31.754899 systemd-journald[244]: Collecting audit messages is disabled.
Sep 13 09:43:31.754917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 09:43:31.754926 systemd-journald[244]: Journal started
Sep 13 09:43:31.754945 systemd-journald[244]: Runtime Journal (/run/log/journal/98e192d844b2486298f4a7442f68229d) is 6M, max 48.5M, 42.4M free.
Sep 13 09:43:31.763211 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 09:43:31.763235 kernel: Bridge firewalling registered
Sep 13 09:43:31.748833 systemd-modules-load[245]: Inserted module 'overlay'
Sep 13 09:43:31.762178 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 13 09:43:31.767068 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 09:43:31.767086 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 09:43:31.768097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 09:43:31.769937 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 09:43:31.772694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 09:43:31.774208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 09:43:31.775732 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 09:43:31.783834 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 09:43:31.785916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 09:43:31.786291 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 13 09:43:31.788157 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 09:43:31.789218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 09:43:31.791915 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 09:43:31.794601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 09:43:31.810367 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=83877cdd7a467ed85aef41f446aeb722db891c2110b10250039f63b8f9619b03
Sep 13 09:43:31.823518 systemd-resolved[290]: Positive Trust Anchors:
Sep 13 09:43:31.823537 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 09:43:31.823567 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 09:43:31.828192 systemd-resolved[290]: Defaulting to hostname 'linux'.
Sep 13 09:43:31.829077 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 09:43:31.831106 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 09:43:31.881895 kernel: SCSI subsystem initialized
Sep 13 09:43:31.886889 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 09:43:31.893898 kernel: iscsi: registered transport (tcp)
Sep 13 09:43:31.906909 kernel: iscsi: registered transport (qla4xxx)
Sep 13 09:43:31.906942 kernel: QLogic iSCSI HBA Driver
Sep 13 09:43:31.922000 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 09:43:31.941928 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 09:43:31.943139 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 09:43:31.986838 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 09:43:31.988759 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 09:43:32.045897 kernel: raid6: neonx8 gen() 15777 MB/s
Sep 13 09:43:32.062887 kernel: raid6: neonx4 gen() 15815 MB/s
Sep 13 09:43:32.079886 kernel: raid6: neonx2 gen() 13209 MB/s
Sep 13 09:43:32.096893 kernel: raid6: neonx1 gen() 10442 MB/s
Sep 13 09:43:32.113892 kernel: raid6: int64x8 gen() 6905 MB/s
Sep 13 09:43:32.130893 kernel: raid6: int64x4 gen() 7330 MB/s
Sep 13 09:43:32.147900 kernel: raid6: int64x2 gen() 6098 MB/s
Sep 13 09:43:32.164897 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 13 09:43:32.164935 kernel: raid6: using algorithm neonx4 gen() 15815 MB/s
Sep 13 09:43:32.181897 kernel: raid6: .... xor() 12350 MB/s, rmw enabled
Sep 13 09:43:32.181912 kernel: raid6: using neon recovery algorithm
Sep 13 09:43:32.186949 kernel: xor: measuring software checksum speed
Sep 13 09:43:32.186968 kernel: 8regs : 21613 MB/sec
Sep 13 09:43:32.187986 kernel: 32regs : 21693 MB/sec
Sep 13 09:43:32.188012 kernel: arm64_neon : 26030 MB/sec
Sep 13 09:43:32.188029 kernel: xor: using function: arm64_neon (26030 MB/sec)
Sep 13 09:43:32.239911 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 09:43:32.246179 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 09:43:32.248348 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 09:43:32.276068 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Sep 13 09:43:32.280088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 09:43:32.281704 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 09:43:32.314721 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Sep 13 09:43:32.336715 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 09:43:32.338858 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 09:43:32.389068 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 09:43:32.392029 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 09:43:32.441038 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 13 09:43:32.441193 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 09:43:32.449334 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 09:43:32.449378 kernel: GPT:9289727 != 19775487
Sep 13 09:43:32.449388 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 09:43:32.450233 kernel: GPT:9289727 != 19775487
Sep 13 09:43:32.450249 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 09:43:32.450882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 09:43:32.453230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 09:43:32.453939 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 09:43:32.460862 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 09:43:32.462384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 09:43:32.490907 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 09:43:32.492011 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 09:43:32.493831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 09:43:32.502018 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 09:43:32.513558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 09:43:32.519344 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 09:43:32.520280 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 09:43:32.522675 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 09:43:32.524508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 09:43:32.526143 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 09:43:32.528355 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 09:43:32.529839 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 09:43:32.545362 disk-uuid[591]: Primary Header is updated.
Sep 13 09:43:32.545362 disk-uuid[591]: Secondary Entries is updated.
Sep 13 09:43:32.545362 disk-uuid[591]: Secondary Header is updated.
Sep 13 09:43:32.549886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 09:43:32.549990 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 09:43:33.558382 disk-uuid[594]: The operation has completed successfully.
Sep 13 09:43:33.560072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 09:43:33.583885 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 09:43:33.583977 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 09:43:33.611074 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 09:43:33.639467 sh[612]: Success
Sep 13 09:43:33.650897 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 09:43:33.650933 kernel: device-mapper: uevent: version 1.0.3
Sep 13 09:43:33.651900 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 13 09:43:33.659914 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 13 09:43:33.684481 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 09:43:33.687380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 09:43:33.700401 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 09:43:33.708218 kernel: BTRFS: device fsid ce611416-d4ae-4fd5-95f5-29fc4047816a devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (624)
Sep 13 09:43:33.708247 kernel: BTRFS info (device dm-0): first mount of filesystem ce611416-d4ae-4fd5-95f5-29fc4047816a
Sep 13 09:43:33.708257 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 13 09:43:33.713899 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 09:43:33.713936 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 13 09:43:33.714411 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 09:43:33.715407 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 09:43:33.716460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 09:43:33.717110 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 09:43:33.718304 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 09:43:33.734913 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 13 09:43:33.734954 kernel: BTRFS info (device vda6): first mount of filesystem d8c2080e-82ac-48ea-8084-7c29a2009dad
Sep 13 09:43:33.734965 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 09:43:33.738253 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 09:43:33.738291 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 09:43:33.741898 kernel: BTRFS info (device vda6): last unmount of filesystem d8c2080e-82ac-48ea-8084-7c29a2009dad
Sep 13 09:43:33.742302 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 09:43:33.744213 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 09:43:33.803917 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 09:43:33.806322 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 09:43:33.840127 ignition[700]: Ignition 2.22.0
Sep 13 09:43:33.840143 ignition[700]: Stage: fetch-offline
Sep 13 09:43:33.840173 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Sep 13 09:43:33.841063 systemd-networkd[801]: lo: Link UP
Sep 13 09:43:33.840181 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 09:43:33.841067 systemd-networkd[801]: lo: Gained carrier
Sep 13 09:43:33.840255 ignition[700]: parsed url from cmdline: ""
Sep 13 09:43:33.841700 systemd-networkd[801]: Enumeration completed
Sep 13 09:43:33.840258 ignition[700]: no config URL provided
Sep 13 09:43:33.842211 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 09:43:33.840262 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 09:43:33.842215 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 09:43:33.840269 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Sep 13 09:43:33.842851 systemd-networkd[801]: eth0: Link UP
Sep 13 09:43:33.840287 ignition[700]: op(1): [started] loading QEMU firmware config module
Sep 13 09:43:33.843016 systemd-networkd[801]: eth0: Gained carrier
Sep 13 09:43:33.840291 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 09:43:33.843025 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 09:43:33.851944 ignition[700]: op(1): [finished] loading QEMU firmware config module
Sep 13 09:43:33.843420 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 09:43:33.851961 ignition[700]: QEMU firmware config was not found. Ignoring...
Sep 13 09:43:33.844619 systemd[1]: Reached target network.target - Network.
Sep 13 09:43:33.866930 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 09:43:33.900257 ignition[700]: parsing config with SHA512: 1fe5a6698b009f879843bba12d036757249c44c2931963a49dd696103e2a91f41be5798ce3d97a4ed1ce3467190dda584c0d585dc78ecae5d05ef8e9a5c5ba10
Sep 13 09:43:33.905400 unknown[700]: fetched base config from "system"
Sep 13 09:43:33.905417 unknown[700]: fetched user config from "qemu"
Sep 13 09:43:33.905754 ignition[700]: fetch-offline: fetch-offline passed
Sep 13 09:43:33.905811 ignition[700]: Ignition finished successfully
Sep 13 09:43:33.908179 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 09:43:33.909283 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 09:43:33.909944 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 09:43:33.939203 ignition[809]: Ignition 2.22.0
Sep 13 09:43:33.939219 ignition[809]: Stage: kargs
Sep 13 09:43:33.939342 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Sep 13 09:43:33.939351 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 09:43:33.940054 ignition[809]: kargs: kargs passed
Sep 13 09:43:33.940096 ignition[809]: Ignition finished successfully
Sep 13 09:43:33.942624 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 09:43:33.944224 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 09:43:33.972561 ignition[817]: Ignition 2.22.0
Sep 13 09:43:33.972578 ignition[817]: Stage: disks
Sep 13 09:43:33.972699 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Sep 13 09:43:33.975474 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 09:43:33.972708 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 09:43:33.976344 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 09:43:33.973426 ignition[817]: disks: disks passed
Sep 13 09:43:33.977684 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 09:43:33.973467 ignition[817]: Ignition finished successfully
Sep 13 09:43:33.979494 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 09:43:33.980915 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 09:43:33.982042 systemd[1]: Reached target basic.target - Basic System.
Sep 13 09:43:33.984244 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 09:43:34.008439 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 13 09:43:34.014199 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 09:43:34.015822 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 09:43:34.082757 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 09:43:34.083931 kernel: EXT4-fs (vda9): mounted filesystem d062d616-d3c8-4433-a4fa-c45b2fea0712 r/w with ordered data mode. Quota mode: none.
Sep 13 09:43:34.083783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 09:43:34.086185 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 09:43:34.087984 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 09:43:34.088787 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 09:43:34.088825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 09:43:34.088845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 09:43:34.102670 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 09:43:34.105898 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835)
Sep 13 09:43:34.105928 kernel: BTRFS info (device vda6): first mount of filesystem d8c2080e-82ac-48ea-8084-7c29a2009dad
Sep 13 09:43:34.106298 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 09:43:34.108979 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 09:43:34.113056 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 09:43:34.113086 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 09:43:34.113892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 09:43:34.139866 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 09:43:34.142808 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory
Sep 13 09:43:34.145598 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 09:43:34.149081 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 09:43:34.208481 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 09:43:34.210377 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 09:43:34.211629 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 09:43:34.225883 kernel: BTRFS info (device vda6): last unmount of filesystem d8c2080e-82ac-48ea-8084-7c29a2009dad
Sep 13 09:43:34.239927 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 09:43:34.253233 ignition[948]: INFO : Ignition 2.22.0
Sep 13 09:43:34.253233 ignition[948]: INFO : Stage: mount
Sep 13 09:43:34.254483 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 09:43:34.254483 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 09:43:34.254483 ignition[948]: INFO : mount: mount passed
Sep 13 09:43:34.254483 ignition[948]: INFO : Ignition finished successfully
Sep 13 09:43:34.256178 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 09:43:34.258016 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 09:43:34.820785 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 09:43:34.822197 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 09:43:34.841914 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (961)
Sep 13 09:43:34.843588 kernel: BTRFS info (device vda6): first mount of filesystem d8c2080e-82ac-48ea-8084-7c29a2009dad
Sep 13 09:43:34.843611 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 09:43:34.845890 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 09:43:34.845918 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 09:43:34.847390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 09:43:34.882365 ignition[978]: INFO : Ignition 2.22.0
Sep 13 09:43:34.882365 ignition[978]: INFO : Stage: files
Sep 13 09:43:34.883627 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 09:43:34.883627 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 09:43:34.883627 ignition[978]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 09:43:34.886453 ignition[978]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 09:43:34.886453 ignition[978]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 09:43:34.886453 ignition[978]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 09:43:34.889851 ignition[978]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 09:43:34.889851 ignition[978]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 09:43:34.889851 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 09:43:34.889851 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 13 09:43:34.886954 unknown[978]: wrote ssh authorized keys file for user: core
Sep 13 09:43:34.941686 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 09:43:35.228932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 09:43:35.228932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 09:43:35.228932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 09:43:35.228932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 09:43:35.228932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 09:43:35.228932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 09:43:35.228932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 09:43:35.228932 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 09:43:35.241356 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 09:43:35.241356 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 09:43:35.241356 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 09:43:35.241356 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 09:43:35.248038 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 09:43:35.248038 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 09:43:35.248038 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 13 09:43:35.666062 systemd-networkd[801]: eth0: Gained IPv6LL
Sep 13 09:43:36.072267 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 09:43:36.843770 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 09:43:36.843770 ignition[978]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 09:43:36.846566 ignition[978]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 09:43:36.850239 ignition[978]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 09:43:36.850239 ignition[978]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 09:43:36.850239 ignition[978]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 13 09:43:36.850239 ignition[978]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 09:43:36.850239 ignition[978]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 09:43:36.850239 ignition[978]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 13 09:43:36.850239 ignition[978]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 09:43:36.863812 ignition[978]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 09:43:36.868209 ignition[978]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 09:43:36.870684 ignition[978]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 09:43:36.870684 ignition[978]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 09:43:36.870684 ignition[978]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 09:43:36.870684 ignition[978]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 09:43:36.870684 ignition[978]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 09:43:36.870684 ignition[978]: INFO : files: files passed
Sep 13 09:43:36.870684 ignition[978]: INFO : Ignition finished successfully
Sep 13 09:43:36.872986 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 09:43:36.875404 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 09:43:36.876863 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 09:43:36.890703 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 09:43:36.891532 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 09:43:36.893705 initrd-setup-root-after-ignition[1007]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 09:43:36.894819 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 09:43:36.894819 initrd-setup-root-after-ignition[1009]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 09:43:36.897229 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 09:43:36.898081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 09:43:36.899325 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 09:43:36.901335 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 09:43:36.945630 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 09:43:36.945770 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 09:43:36.947514 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 09:43:36.948817 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 09:43:36.950268 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 09:43:36.950981 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 09:43:36.980705 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 09:43:36.982820 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 09:43:37.002654 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 09:43:37.003674 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 09:43:37.005280 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 09:43:37.006643 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 09:43:37.006768 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 09:43:37.008629 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 09:43:37.010232 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 09:43:37.011512 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 09:43:37.012786 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 09:43:37.014321 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 09:43:37.015747 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 09:43:37.017475 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 09:43:37.018920 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 09:43:37.020767 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 09:43:37.022342 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 09:43:37.023625 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 09:43:37.024770 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 09:43:37.024905 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 09:43:37.026663 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 09:43:37.028129 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 09:43:37.029619 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 09:43:37.032965 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 09:43:37.033909 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 09:43:37.034023 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 09:43:37.036291 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 09:43:37.036405 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 09:43:37.037858 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 09:43:37.039069 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 09:43:37.042929 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 09:43:37.043882 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 09:43:37.045620 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 09:43:37.046790 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 09:43:37.046869 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 09:43:37.048058 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 09:43:37.048128 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 09:43:37.049310 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 09:43:37.049413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 09:43:37.050793 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 09:43:37.050902 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 09:43:37.052794 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 09:43:37.054009 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 09:43:37.054123 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 09:43:37.056653 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 09:43:37.057640 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 09:43:37.057781 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 09:43:37.059232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 09:43:37.059325 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 09:43:37.065355 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 09:43:37.065449 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 09:43:37.071824 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 09:43:37.081122 ignition[1034]: INFO : Ignition 2.22.0
Sep 13 09:43:37.081938 ignition[1034]: INFO : Stage: umount
Sep 13 09:43:37.081938 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 09:43:37.081938 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 09:43:37.085076 ignition[1034]: INFO : umount: umount passed
Sep 13 09:43:37.085076 ignition[1034]: INFO : Ignition finished successfully
Sep 13 09:43:37.084384 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 09:43:37.084475 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 09:43:37.085864 systemd[1]: Stopped target network.target - Network.
Sep 13 09:43:37.086845 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 09:43:37.088961 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 09:43:37.089711 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 09:43:37.089765 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 09:43:37.091068 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 09:43:37.091113 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 09:43:37.092351 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 09:43:37.092387 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 09:43:37.093946 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 09:43:37.098472 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 09:43:37.109135 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 09:43:37.109264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 09:43:37.112407 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 09:43:37.112656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 09:43:37.112692 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 09:43:37.115533 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 09:43:37.118256 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 09:43:37.118361 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 09:43:37.122351 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 09:43:37.122489 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 09:43:37.124678 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 09:43:37.124709 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 09:43:37.126898 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 09:43:37.128364 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 09:43:37.128418 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 09:43:37.129837 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 09:43:37.129885 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 09:43:37.134778 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 09:43:37.134820 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 09:43:37.135906 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 09:43:37.139630 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 09:43:37.163692 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 09:43:37.163850 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 09:43:37.166065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 09:43:37.166101 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 09:43:37.168350 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 09:43:37.168385 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 09:43:37.170119 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 09:43:37.170164 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 09:43:37.172949 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 09:43:37.172996 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 09:43:37.175477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 09:43:37.175519 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 09:43:37.185154 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 09:43:37.186053 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 09:43:37.186109 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 09:43:37.188670 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 09:43:37.188709 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 09:43:37.190784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 09:43:37.190828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 09:43:37.194207 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 09:43:37.195969 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 09:43:37.197273 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 09:43:37.197342 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 09:43:37.199007 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 09:43:37.199085 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 09:43:37.200610 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 09:43:37.201310 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 09:43:37.203110 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 09:43:37.205179 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 09:43:37.222882 systemd[1]: Switching root.
Sep 13 09:43:37.251579 systemd-journald[244]: Journal stopped
Sep 13 09:43:38.051211 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 13 09:43:38.051257 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 09:43:38.051274 kernel: SELinux: policy capability open_perms=1
Sep 13 09:43:38.051287 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 09:43:38.051297 kernel: SELinux: policy capability always_check_network=0
Sep 13 09:43:38.051307 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 09:43:38.051316 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 09:43:38.051327 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 09:43:38.051336 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 09:43:38.051346 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 09:43:38.051356 systemd[1]: Successfully loaded SELinux policy in 62.402ms.
Sep 13 09:43:38.051374 kernel: audit: type=1403 audit(1757756617.482:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 09:43:38.051388 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.441ms.
Sep 13 09:43:38.051399 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 09:43:38.051410 systemd[1]: Detected virtualization kvm.
Sep 13 09:43:38.051419 systemd[1]: Detected architecture arm64.
Sep 13 09:43:38.051430 systemd[1]: Detected first boot.
Sep 13 09:43:38.051443 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 09:43:38.051453 zram_generator::config[1080]: No configuration found.
Sep 13 09:43:38.051464 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 09:43:38.051474 systemd[1]: Populated /etc with preset unit settings.
Sep 13 09:43:38.051485 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 09:43:38.051495 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 09:43:38.051504 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 09:43:38.051518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 09:43:38.051528 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 09:43:38.051538 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 09:43:38.051548 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 09:43:38.051557 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 09:43:38.051567 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 09:43:38.051577 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 09:43:38.051587 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 09:43:38.051597 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 09:43:38.051608 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 09:43:38.051619 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 09:43:38.051629 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 09:43:38.051639 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 09:43:38.051648 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 09:43:38.051659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 09:43:38.051669 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 13 09:43:38.051679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 09:43:38.051690 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 09:43:38.051700 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 09:43:38.051710 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 09:43:38.051733 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 09:43:38.051743 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 09:43:38.051754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 09:43:38.051764 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 09:43:38.051775 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 09:43:38.051785 systemd[1]: Reached target swap.target - Swaps.
Sep 13 09:43:38.051796 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 09:43:38.051806 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 09:43:38.051816 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 09:43:38.051825 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 09:43:38.051835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 09:43:38.051845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 09:43:38.051855 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 09:43:38.051865 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 09:43:38.051932 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 09:43:38.051947 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 09:43:38.051958 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 09:43:38.051968 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 09:43:38.051978 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 09:43:38.051988 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 09:43:38.051998 systemd[1]: Reached target machines.target - Containers.
Sep 13 09:43:38.052008 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 09:43:38.052018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 09:43:38.052031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 09:43:38.052041 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 09:43:38.052051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 09:43:38.052062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 09:43:38.052073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 09:43:38.052084 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 09:43:38.052094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 09:43:38.052104 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 09:43:38.052116 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 09:43:38.052127 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 09:43:38.052138 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 09:43:38.052148 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 09:43:38.052159 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 09:43:38.052170 kernel: loop: module loaded
Sep 13 09:43:38.052179 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 09:43:38.052189 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 09:43:38.052198 kernel: fuse: init (API version 7.41)
Sep 13 09:43:38.052209 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 09:43:38.052219 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 09:43:38.052230 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 09:43:38.052240 kernel: ACPI: bus type drm_connector registered
Sep 13 09:43:38.052249 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 09:43:38.052260 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 09:43:38.052270 systemd[1]: Stopped verity-setup.service.
Sep 13 09:43:38.052280 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 09:43:38.052289 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 09:43:38.052299 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 09:43:38.052331 systemd-journald[1148]: Collecting audit messages is disabled.
Sep 13 09:43:38.052353 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 09:43:38.052364 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 09:43:38.052376 systemd-journald[1148]: Journal started
Sep 13 09:43:38.052396 systemd-journald[1148]: Runtime Journal (/run/log/journal/98e192d844b2486298f4a7442f68229d) is 6M, max 48.5M, 42.4M free.
Sep 13 09:43:37.850951 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 09:43:37.869916 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 09:43:37.871125 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 09:43:38.054945 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 09:43:38.055532 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 09:43:38.057889 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 09:43:38.059005 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 09:43:38.060160 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 09:43:38.060311 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 09:43:38.061461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 09:43:38.061647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 09:43:38.062785 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 09:43:38.062977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 09:43:38.064091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 09:43:38.064240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 09:43:38.065408 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 09:43:38.065567 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 09:43:38.066637 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 09:43:38.066788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 09:43:38.068165 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 09:43:38.070288 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 09:43:38.071515 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 09:43:38.072828 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 09:43:38.084199 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 09:43:38.086226 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 09:43:38.087999 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 09:43:38.088866 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 09:43:38.088958 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 09:43:38.090551 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 09:43:38.101616 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 09:43:38.102617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 09:43:38.103684 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 09:43:38.105425 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 09:43:38.107152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 09:43:38.108135 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 09:43:38.110967 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 09:43:38.111853 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 09:43:38.114987 systemd-journald[1148]: Time spent on flushing to /var/log/journal/98e192d844b2486298f4a7442f68229d is 12.199ms for 884 entries.
Sep 13 09:43:38.114987 systemd-journald[1148]: System Journal (/var/log/journal/98e192d844b2486298f4a7442f68229d) is 8M, max 195.6M, 187.6M free.
Sep 13 09:43:38.137442 systemd-journald[1148]: Received client request to flush runtime journal.
Sep 13 09:43:38.137475 kernel: loop0: detected capacity change from 0 to 100632
Sep 13 09:43:38.113568 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 09:43:38.116148 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 09:43:38.119859 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 09:43:38.121442 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 09:43:38.122724 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 09:43:38.125327 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 09:43:38.129522 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 09:43:38.133075 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 09:43:38.144340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 09:43:38.147566 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 09:43:38.150092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 09:43:38.169397 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 09:43:38.173573 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 09:43:38.175895 kernel: loop1: detected capacity change from 0 to 211168
Sep 13 09:43:38.178071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 09:43:38.205915 kernel: loop2: detected capacity change from 0 to 119368
Sep 13 09:43:38.212379 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Sep 13 09:43:38.212395 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Sep 13 09:43:38.216007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 09:43:38.237730 kernel: loop3: detected capacity change from 0 to 100632
Sep 13 09:43:38.242889 kernel: loop4: detected capacity change from 0 to 211168
Sep 13 09:43:38.248895 kernel: loop5: detected capacity change from 0 to 119368
Sep 13 09:43:38.252037 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 09:43:38.252409 (sd-merge)[1219]: Merged extensions into '/usr'.
Sep 13 09:43:38.255623 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 09:43:38.255642 systemd[1]: Reloading...
Sep 13 09:43:38.315753 zram_generator::config[1248]: No configuration found.
Sep 13 09:43:38.421500 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 09:43:38.458122 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 09:43:38.458416 systemd[1]: Reloading finished in 202 ms.
Sep 13 09:43:38.476391 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 09:43:38.477644 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 09:43:38.488140 systemd[1]: Starting ensure-sysext.service...
Sep 13 09:43:38.489788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 09:43:38.498482 systemd[1]: Reload requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)...
Sep 13 09:43:38.498499 systemd[1]: Reloading...
Sep 13 09:43:38.508204 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 09:43:38.508229 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 09:43:38.508452 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 09:43:38.508632 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 09:43:38.509629 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 09:43:38.510025 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 13 09:43:38.510150 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 13 09:43:38.512418 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 09:43:38.512517 systemd-tmpfiles[1283]: Skipping /boot
Sep 13 09:43:38.518304 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 09:43:38.518389 systemd-tmpfiles[1283]: Skipping /boot
Sep 13 09:43:38.542926 zram_generator::config[1310]: No configuration found.
Sep 13 09:43:38.672215 systemd[1]: Reloading finished in 173 ms.
Sep 13 09:43:38.697373 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 09:43:38.703945 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 09:43:38.714786 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 09:43:38.716972 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 09:43:38.718792 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 09:43:38.721245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 09:43:38.725014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 09:43:38.728019 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 09:43:38.734240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 09:43:38.735562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 09:43:38.738027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 09:43:38.740782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 09:43:38.742068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 09:43:38.742194 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 09:43:38.751836 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 09:43:38.754058 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 09:43:38.756016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 09:43:38.756173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 09:43:38.757518 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Sep 13 09:43:38.758523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 09:43:38.758660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 09:43:38.762303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 09:43:38.762458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 09:43:38.768696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 09:43:38.771135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 09:43:38.773671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 09:43:38.775968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 09:43:38.777046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 09:43:38.777204 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 09:43:38.778890 augenrules[1380]: No rules
Sep 13 09:43:38.779154 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 09:43:38.781866 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 09:43:38.783511 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 09:43:38.789075 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 09:43:38.790609 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 09:43:38.793075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 09:43:38.793493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 09:43:38.795270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 09:43:38.795423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 09:43:38.797188 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 09:43:38.797331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 09:43:38.798772 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 09:43:38.803382 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 09:43:38.809962 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 09:43:38.816986 systemd[1]: Finished ensure-sysext.service.
Sep 13 09:43:38.819533 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 09:43:38.820516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 09:43:38.821413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 09:43:38.834034 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 09:43:38.835774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 09:43:38.837796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 09:43:38.838864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 09:43:38.839110 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 09:43:38.841493 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 09:43:38.844815 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 09:43:38.845849 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 09:43:38.846525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 09:43:38.846736 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 09:43:38.850282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 09:43:38.850455 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 09:43:38.852324 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 09:43:38.852482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 09:43:38.859656 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 09:43:38.859853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 09:43:38.863557 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 13 09:43:38.863617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 09:43:38.863666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 09:43:38.876621 augenrules[1427]: /sbin/augenrules: No change
Sep 13 09:43:38.889399 augenrules[1459]: No rules
Sep 13 09:43:38.891021 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 09:43:38.891265 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 09:43:38.939234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 09:43:38.941298 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 09:43:38.967559 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 09:43:38.976476 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 09:43:38.978360 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 09:43:38.983796 systemd-resolved[1349]: Positive Trust Anchors:
Sep 13 09:43:38.983816 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 09:43:38.983846 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 09:43:38.989486 systemd-networkd[1435]: lo: Link UP
Sep 13 09:43:38.989497 systemd-networkd[1435]: lo: Gained carrier
Sep 13 09:43:38.990397 systemd-networkd[1435]: Enumeration completed
Sep 13 09:43:38.990501 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 09:43:38.990810 systemd-resolved[1349]: Defaulting to hostname 'linux'.
Sep 13 09:43:38.992078 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 09:43:38.993093 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 09:43:38.993104 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 09:43:38.993603 systemd[1]: Reached target network.target - Network.
Sep 13 09:43:38.994757 systemd-networkd[1435]: eth0: Link UP
Sep 13 09:43:38.994892 systemd-networkd[1435]: eth0: Gained carrier
Sep 13 09:43:38.994908 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 09:43:38.995197 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 09:43:38.996280 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 09:43:38.998017 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 09:43:38.998998 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 09:43:39.001153 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 09:43:39.002044 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 09:43:39.003001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 09:43:39.004157 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 09:43:39.004187 systemd[1]: Reached target paths.target - Path Units. Sep 13 09:43:39.005470 systemd[1]: Reached target timers.target - Timer Units. Sep 13 09:43:39.007364 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 09:43:39.010214 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 09:43:39.014284 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 13 09:43:39.016727 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 13 09:43:39.018797 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 13 09:43:39.025456 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 09:43:39.027289 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 13 09:43:39.030171 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 09:43:39.030776 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection. 
Sep 13 09:43:39.031407 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 13 09:43:38.552262 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 09:43:38.556569 systemd-journald[1148]: Time jumped backwards, rotating. Sep 13 09:43:38.552492 systemd-resolved[1349]: Clock change detected. Flushing caches. Sep 13 09:43:38.552779 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 09:43:38.553083 systemd-timesyncd[1438]: Initial clock synchronization to Sat 2025-09-13 09:43:38.552162 UTC. Sep 13 09:43:38.555270 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 09:43:38.563133 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 09:43:38.564036 systemd[1]: Reached target basic.target - Basic System. Sep 13 09:43:38.564901 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 09:43:38.564990 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 09:43:38.568717 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 09:43:38.570727 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 09:43:38.572834 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 09:43:38.574741 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 09:43:38.577055 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 09:43:38.578130 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 09:43:38.580394 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 13 09:43:38.582135 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 09:43:38.586432 jq[1497]: false Sep 13 09:43:38.587833 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 09:43:38.590573 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 09:43:38.591695 extend-filesystems[1498]: Found /dev/vda6 Sep 13 09:43:38.593654 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 09:43:38.596842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 09:43:38.598724 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 09:43:38.599026 extend-filesystems[1498]: Found /dev/vda9 Sep 13 09:43:38.599280 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 09:43:38.599905 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 09:43:38.602751 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 09:43:38.603027 extend-filesystems[1498]: Checking size of /dev/vda9 Sep 13 09:43:38.611871 extend-filesystems[1498]: Resized partition /dev/vda9 Sep 13 09:43:38.614789 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 13 09:43:38.614866 extend-filesystems[1524]: resize2fs 1.47.3 (8-Jul-2025) Sep 13 09:43:38.620402 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 09:43:38.618153 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 09:43:38.620852 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 13 09:43:38.621029 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 09:43:38.623674 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 09:43:38.623870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 09:43:38.626624 jq[1516]: true Sep 13 09:43:38.637313 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 09:43:38.637689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 09:43:38.659710 update_engine[1515]: I20250913 09:43:38.659389 1515 main.cc:92] Flatcar Update Engine starting Sep 13 09:43:38.661176 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 09:43:38.666795 tar[1527]: linux-arm64/LICENSE Sep 13 09:43:38.666795 tar[1527]: linux-arm64/helm Sep 13 09:43:38.668020 jq[1529]: true Sep 13 09:43:38.678588 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 09:43:38.691798 dbus-daemon[1495]: [system] SELinux support is enabled Sep 13 09:43:38.692001 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 09:43:38.695603 extend-filesystems[1524]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 09:43:38.695603 extend-filesystems[1524]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 09:43:38.695603 extend-filesystems[1524]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 09:43:38.708068 update_engine[1515]: I20250913 09:43:38.695013 1515 update_check_scheduler.cc:74] Next update check in 9m57s Sep 13 09:43:38.695667 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 13 09:43:38.708160 extend-filesystems[1498]: Resized filesystem in /dev/vda9 Sep 13 09:43:38.695691 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 09:43:38.697542 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 09:43:38.697567 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 09:43:38.703766 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 09:43:38.704361 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 09:43:38.712962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 09:43:38.715750 systemd[1]: Started update-engine.service - Update Engine. Sep 13 09:43:38.719265 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 09:43:38.727152 bash[1561]: Updated "/home/core/.ssh/authorized_keys" Sep 13 09:43:38.727365 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 09:43:38.730046 systemd-logind[1509]: New seat seat0. Sep 13 09:43:38.730752 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 09:43:38.731867 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 09:43:38.734814 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 13 09:43:38.771717 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 09:43:38.828581 containerd[1530]: time="2025-09-13T09:43:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 13 09:43:38.829994 containerd[1530]: time="2025-09-13T09:43:38.829957757Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 13 09:43:38.846448 containerd[1530]: time="2025-09-13T09:43:38.846373157Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.48µs" Sep 13 09:43:38.846448 containerd[1530]: time="2025-09-13T09:43:38.846423717Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 13 09:43:38.846448 containerd[1530]: time="2025-09-13T09:43:38.846442597Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.846768917Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.846798557Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.846825997Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.846941117Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.846956277Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.847287077Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.847305877Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.847395997Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.847460797Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.847567997Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848579 containerd[1530]: time="2025-09-13T09:43:38.847873757Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848794 containerd[1530]: time="2025-09-13T09:43:38.847907957Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 09:43:38.848794 containerd[1530]: time="2025-09-13T09:43:38.847919877Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 13 09:43:38.848794 containerd[1530]: time="2025-09-13T09:43:38.847969077Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 13 09:43:38.848794 containerd[1530]: time="2025-09-13T09:43:38.848409717Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 13 09:43:38.848794 containerd[1530]: time="2025-09-13T09:43:38.848492117Z" level=info msg="metadata content store policy set" policy=shared Sep 13 09:43:38.851791 containerd[1530]: time="2025-09-13T09:43:38.851754677Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 13 09:43:38.851847 containerd[1530]: time="2025-09-13T09:43:38.851828717Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 13 09:43:38.852064 containerd[1530]: time="2025-09-13T09:43:38.852038677Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 13 09:43:38.852094 containerd[1530]: time="2025-09-13T09:43:38.852079837Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 13 09:43:38.852123 containerd[1530]: time="2025-09-13T09:43:38.852100517Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 13 09:43:38.852156 containerd[1530]: time="2025-09-13T09:43:38.852142797Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 13 09:43:38.854022 containerd[1530]: time="2025-09-13T09:43:38.853724637Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 13 09:43:38.854022 containerd[1530]: time="2025-09-13T09:43:38.853776797Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 13 09:43:38.854022 containerd[1530]: time="2025-09-13T09:43:38.853792677Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 13 09:43:38.854022 containerd[1530]: time="2025-09-13T09:43:38.853808557Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 13 09:43:38.854022 containerd[1530]: time="2025-09-13T09:43:38.853822357Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 13 09:43:38.854022 containerd[1530]: time="2025-09-13T09:43:38.853840957Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 13 09:43:38.854022 containerd[1530]: time="2025-09-13T09:43:38.853969797Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 13 09:43:38.854022 containerd[1530]: time="2025-09-13T09:43:38.853991597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 13 09:43:38.854204 containerd[1530]: time="2025-09-13T09:43:38.854156237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 13 09:43:38.854204 containerd[1530]: time="2025-09-13T09:43:38.854180957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 13 09:43:38.854282 containerd[1530]: time="2025-09-13T09:43:38.854258357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 13 09:43:38.854310 containerd[1530]: time="2025-09-13T09:43:38.854283957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 13 09:43:38.854357 containerd[1530]: time="2025-09-13T09:43:38.854301157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 13 09:43:38.854390 containerd[1530]: time="2025-09-13T09:43:38.854365677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 13 
09:43:38.854417 containerd[1530]: time="2025-09-13T09:43:38.854393837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 13 09:43:38.854417 containerd[1530]: time="2025-09-13T09:43:38.854408117Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 13 09:43:38.854452 containerd[1530]: time="2025-09-13T09:43:38.854424757Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 13 09:43:38.855063 containerd[1530]: time="2025-09-13T09:43:38.854928077Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 13 09:43:38.855480 containerd[1530]: time="2025-09-13T09:43:38.855076637Z" level=info msg="Start snapshots syncer" Sep 13 09:43:38.855480 containerd[1530]: time="2025-09-13T09:43:38.855310677Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 13 09:43:38.856062 containerd[1530]: time="2025-09-13T09:43:38.855998797Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 13 09:43:38.856160 containerd[1530]: time="2025-09-13T09:43:38.856109757Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 13 09:43:38.856277 containerd[1530]: time="2025-09-13T09:43:38.856246437Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 13 09:43:38.856502 containerd[1530]: time="2025-09-13T09:43:38.856476797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 13 09:43:38.856529 containerd[1530]: time="2025-09-13T09:43:38.856511917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 13 09:43:38.856529 containerd[1530]: time="2025-09-13T09:43:38.856524917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 13 09:43:38.856583 containerd[1530]: time="2025-09-13T09:43:38.856535237Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 13 09:43:38.856696 containerd[1530]: time="2025-09-13T09:43:38.856673677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 13 09:43:38.856717 containerd[1530]: time="2025-09-13T09:43:38.856700077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 13 09:43:38.856717 containerd[1530]: time="2025-09-13T09:43:38.856713797Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 09:43:38.856822 containerd[1530]: time="2025-09-13T09:43:38.856803837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 09:43:38.856841 containerd[1530]: time="2025-09-13T09:43:38.856827677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 09:43:38.856894 containerd[1530]: time="2025-09-13T09:43:38.856841077Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 09:43:38.856940 containerd[1530]: time="2025-09-13T09:43:38.856926637Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 09:43:38.857069 containerd[1530]: time="2025-09-13T09:43:38.857050317Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 09:43:38.857089 containerd[1530]: time="2025-09-13T09:43:38.857069917Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 09:43:38.857089 containerd[1530]: time="2025-09-13T09:43:38.857081197Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 09:43:38.857120 containerd[1530]: time="2025-09-13T09:43:38.857088677Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 09:43:38.857120 containerd[1530]: time="2025-09-13T09:43:38.857103637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 09:43:38.857120 containerd[1530]: time="2025-09-13T09:43:38.857114077Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 09:43:38.857202 containerd[1530]: time="2025-09-13T09:43:38.857191117Z" level=info msg="runtime interface created" Sep 13 09:43:38.857220 containerd[1530]: time="2025-09-13T09:43:38.857200157Z" level=info msg="created NRI interface" Sep 13 09:43:38.857220 containerd[1530]: time="2025-09-13T09:43:38.857212157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 09:43:38.857250 containerd[1530]: time="2025-09-13T09:43:38.857224397Z" level=info msg="Connect containerd service" Sep 13 09:43:38.857266 containerd[1530]: time="2025-09-13T09:43:38.857252597Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 09:43:38.859181 
containerd[1530]: time="2025-09-13T09:43:38.859138517Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 09:43:38.925770 containerd[1530]: time="2025-09-13T09:43:38.925561357Z" level=info msg="Start subscribing containerd event" Sep 13 09:43:38.925770 containerd[1530]: time="2025-09-13T09:43:38.925634997Z" level=info msg="Start recovering state" Sep 13 09:43:38.925770 containerd[1530]: time="2025-09-13T09:43:38.925738317Z" level=info msg="Start event monitor" Sep 13 09:43:38.925889 containerd[1530]: time="2025-09-13T09:43:38.925824877Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 09:43:38.925889 containerd[1530]: time="2025-09-13T09:43:38.925869117Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 09:43:38.925953 containerd[1530]: time="2025-09-13T09:43:38.925939837Z" level=info msg="Start cni network conf syncer for default" Sep 13 09:43:38.925995 containerd[1530]: time="2025-09-13T09:43:38.925984157Z" level=info msg="Start streaming server" Sep 13 09:43:38.926065 containerd[1530]: time="2025-09-13T09:43:38.926053557Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 09:43:38.926107 containerd[1530]: time="2025-09-13T09:43:38.926096037Z" level=info msg="runtime interface starting up..." Sep 13 09:43:38.926145 containerd[1530]: time="2025-09-13T09:43:38.926135077Z" level=info msg="starting plugins..." Sep 13 09:43:38.926197 containerd[1530]: time="2025-09-13T09:43:38.926185957Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 09:43:38.926584 containerd[1530]: time="2025-09-13T09:43:38.926566517Z" level=info msg="containerd successfully booted in 0.098396s" Sep 13 09:43:38.926681 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 13 09:43:39.003638 tar[1527]: linux-arm64/README.md Sep 13 09:43:39.020701 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 09:43:39.677175 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 09:43:39.697602 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 09:43:39.699970 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 09:43:39.717859 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 09:43:39.718079 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 09:43:39.720392 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 09:43:39.741102 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 09:43:39.743520 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 09:43:39.745311 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 13 09:43:39.746434 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 09:43:39.919759 systemd-networkd[1435]: eth0: Gained IPv6LL Sep 13 09:43:39.922045 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 09:43:39.924630 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 09:43:39.926735 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 09:43:39.931115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 09:43:39.945442 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 09:43:39.958877 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 09:43:39.959921 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 09:43:39.961298 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 13 09:43:39.962940 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 09:43:40.488964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 09:43:40.490287 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 09:43:40.492665 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 09:43:40.495675 systemd[1]: Startup finished in 1.992s (kernel) + 5.872s (initrd) + 3.558s (userspace) = 11.423s. Sep 13 09:43:40.838255 kubelet[1636]: E0913 09:43:40.838134 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 09:43:40.840435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 09:43:40.840608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 09:43:40.841099 systemd[1]: kubelet.service: Consumed 753ms CPU time, 257.7M memory peak. Sep 13 09:43:43.741404 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 09:43:43.742859 systemd[1]: Started sshd@0-10.0.0.32:22-10.0.0.1:54014.service - OpenSSH per-connection server daemon (10.0.0.1:54014). Sep 13 09:43:43.803289 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 54014 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE Sep 13 09:43:43.804724 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 09:43:43.810217 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 09:43:43.811019 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 13 09:43:43.816068 systemd-logind[1509]: New session 1 of user core. Sep 13 09:43:43.832988 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 09:43:43.835156 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 09:43:43.854329 (systemd)[1656]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 09:43:43.856296 systemd-logind[1509]: New session c1 of user core. Sep 13 09:43:43.960297 systemd[1656]: Queued start job for default target default.target. Sep 13 09:43:43.966450 systemd[1656]: Created slice app.slice - User Application Slice. Sep 13 09:43:43.966478 systemd[1656]: Reached target paths.target - Paths. Sep 13 09:43:43.966510 systemd[1656]: Reached target timers.target - Timers. Sep 13 09:43:43.967657 systemd[1656]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 09:43:43.976619 systemd[1656]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 09:43:43.976677 systemd[1656]: Reached target sockets.target - Sockets. Sep 13 09:43:43.976716 systemd[1656]: Reached target basic.target - Basic System. Sep 13 09:43:43.976743 systemd[1656]: Reached target default.target - Main User Target. Sep 13 09:43:43.976766 systemd[1656]: Startup finished in 115ms. Sep 13 09:43:43.976848 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 09:43:43.978010 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 09:43:44.045193 systemd[1]: Started sshd@1-10.0.0.32:22-10.0.0.1:54022.service - OpenSSH per-connection server daemon (10.0.0.1:54022). Sep 13 09:43:44.096273 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 54022 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE Sep 13 09:43:44.097457 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 09:43:44.101121 systemd-logind[1509]: New session 2 of user core. 
Sep 13 09:43:44.112768 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 09:43:44.164035 sshd[1670]: Connection closed by 10.0.0.1 port 54022 Sep 13 09:43:44.164478 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 13 09:43:44.178476 systemd[1]: sshd@1-10.0.0.32:22-10.0.0.1:54022.service: Deactivated successfully. Sep 13 09:43:44.179932 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 09:43:44.180723 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Sep 13 09:43:44.182840 systemd[1]: Started sshd@2-10.0.0.32:22-10.0.0.1:54036.service - OpenSSH per-connection server daemon (10.0.0.1:54036). Sep 13 09:43:44.183829 systemd-logind[1509]: Removed session 2. Sep 13 09:43:44.235892 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 54036 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE Sep 13 09:43:44.237079 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 09:43:44.241042 systemd-logind[1509]: New session 3 of user core. Sep 13 09:43:44.262702 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 09:43:44.311052 sshd[1679]: Connection closed by 10.0.0.1 port 54036 Sep 13 09:43:44.311304 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Sep 13 09:43:44.324345 systemd[1]: sshd@2-10.0.0.32:22-10.0.0.1:54036.service: Deactivated successfully. Sep 13 09:43:44.326720 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 09:43:44.327282 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Sep 13 09:43:44.329216 systemd[1]: Started sshd@3-10.0.0.32:22-10.0.0.1:54050.service - OpenSSH per-connection server daemon (10.0.0.1:54050). Sep 13 09:43:44.329682 systemd-logind[1509]: Removed session 3. 
Sep 13 09:43:44.382301 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 54050 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE Sep 13 09:43:44.383326 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 09:43:44.386741 systemd-logind[1509]: New session 4 of user core. Sep 13 09:43:44.393685 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 09:43:44.443792 sshd[1688]: Connection closed by 10.0.0.1 port 54050 Sep 13 09:43:44.444062 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Sep 13 09:43:44.454143 systemd[1]: sshd@3-10.0.0.32:22-10.0.0.1:54050.service: Deactivated successfully. Sep 13 09:43:44.455346 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 09:43:44.456985 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Sep 13 09:43:44.458886 systemd[1]: Started sshd@4-10.0.0.32:22-10.0.0.1:54066.service - OpenSSH per-connection server daemon (10.0.0.1:54066). Sep 13 09:43:44.459780 systemd-logind[1509]: Removed session 4. Sep 13 09:43:44.510977 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 54066 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE Sep 13 09:43:44.511938 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 09:43:44.516310 systemd-logind[1509]: New session 5 of user core. Sep 13 09:43:44.525675 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 13 09:43:44.581804 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 09:43:44.582065 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 09:43:44.592326 sudo[1698]: pam_unix(sudo:session): session closed for user root Sep 13 09:43:44.593639 sshd[1697]: Connection closed by 10.0.0.1 port 54066 Sep 13 09:43:44.594060 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Sep 13 09:43:44.607450 systemd[1]: sshd@4-10.0.0.32:22-10.0.0.1:54066.service: Deactivated successfully. Sep 13 09:43:44.610725 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 09:43:44.611312 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Sep 13 09:43:44.613164 systemd[1]: Started sshd@5-10.0.0.32:22-10.0.0.1:54080.service - OpenSSH per-connection server daemon (10.0.0.1:54080). Sep 13 09:43:44.613885 systemd-logind[1509]: Removed session 5. Sep 13 09:43:44.656749 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 54080 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE Sep 13 09:43:44.657847 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 09:43:44.661222 systemd-logind[1509]: New session 6 of user core. Sep 13 09:43:44.675684 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 09:43:44.725311 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 09:43:44.725592 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 09:43:44.768614 sudo[1709]: pam_unix(sudo:session): session closed for user root Sep 13 09:43:44.773422 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 09:43:44.773700 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 09:43:44.782269 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 09:43:44.817505 augenrules[1731]: No rules Sep 13 09:43:44.818542 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 09:43:44.818775 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 09:43:44.819623 sudo[1708]: pam_unix(sudo:session): session closed for user root Sep 13 09:43:44.821354 sshd[1707]: Connection closed by 10.0.0.1 port 54080 Sep 13 09:43:44.821239 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Sep 13 09:43:44.836327 systemd[1]: sshd@5-10.0.0.32:22-10.0.0.1:54080.service: Deactivated successfully. Sep 13 09:43:44.838102 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 09:43:44.839039 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Sep 13 09:43:44.840681 systemd[1]: Started sshd@6-10.0.0.32:22-10.0.0.1:54094.service - OpenSSH per-connection server daemon (10.0.0.1:54094). Sep 13 09:43:44.841423 systemd-logind[1509]: Removed session 6. Sep 13 09:43:44.896780 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 54094 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE Sep 13 09:43:44.897893 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 09:43:44.902259 systemd-logind[1509]: New session 7 of user core. 
Sep 13 09:43:44.913682 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 09:43:44.963224 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 09:43:44.963766 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 09:43:45.225481 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 09:43:45.247810 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 09:43:45.438768 dockerd[1765]: time="2025-09-13T09:43:45.438709077Z" level=info msg="Starting up" Sep 13 09:43:45.439639 dockerd[1765]: time="2025-09-13T09:43:45.439614957Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 09:43:45.450571 dockerd[1765]: time="2025-09-13T09:43:45.450517517Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 13 09:43:45.557214 dockerd[1765]: time="2025-09-13T09:43:45.557114717Z" level=info msg="Loading containers: start." Sep 13 09:43:45.564668 kernel: Initializing XFRM netlink socket Sep 13 09:43:45.745530 systemd-networkd[1435]: docker0: Link UP Sep 13 09:43:45.748687 dockerd[1765]: time="2025-09-13T09:43:45.748644677Z" level=info msg="Loading containers: done." 
Sep 13 09:43:45.762148 dockerd[1765]: time="2025-09-13T09:43:45.762093957Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 09:43:45.762288 dockerd[1765]: time="2025-09-13T09:43:45.762183997Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 13 09:43:45.762288 dockerd[1765]: time="2025-09-13T09:43:45.762272437Z" level=info msg="Initializing buildkit" Sep 13 09:43:45.782792 dockerd[1765]: time="2025-09-13T09:43:45.782739677Z" level=info msg="Completed buildkit initialization" Sep 13 09:43:45.789349 dockerd[1765]: time="2025-09-13T09:43:45.789295677Z" level=info msg="Daemon has completed initialization" Sep 13 09:43:45.789467 dockerd[1765]: time="2025-09-13T09:43:45.789363077Z" level=info msg="API listen on /run/docker.sock" Sep 13 09:43:45.789676 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 09:43:46.467842 containerd[1530]: time="2025-09-13T09:43:46.467799317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 09:43:47.154208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228855686.mount: Deactivated successfully. 
Sep 13 09:43:48.155609 containerd[1530]: time="2025-09-13T09:43:48.155389117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:48.156042 containerd[1530]: time="2025-09-13T09:43:48.156013757Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 13 09:43:48.156796 containerd[1530]: time="2025-09-13T09:43:48.156762237Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:48.159169 containerd[1530]: time="2025-09-13T09:43:48.159124677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:48.160257 containerd[1530]: time="2025-09-13T09:43:48.160223357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.69237928s" Sep 13 09:43:48.160305 containerd[1530]: time="2025-09-13T09:43:48.160267757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 13 09:43:48.161636 containerd[1530]: time="2025-09-13T09:43:48.161606957Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 09:43:49.176082 containerd[1530]: time="2025-09-13T09:43:49.176016237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:49.176720 containerd[1530]: time="2025-09-13T09:43:49.176684637Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 13 09:43:49.177281 containerd[1530]: time="2025-09-13T09:43:49.177256917Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:49.179913 containerd[1530]: time="2025-09-13T09:43:49.179875437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:49.181053 containerd[1530]: time="2025-09-13T09:43:49.181018797Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.01937792s" Sep 13 09:43:49.181242 containerd[1530]: time="2025-09-13T09:43:49.181136677Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 13 09:43:49.181832 containerd[1530]: time="2025-09-13T09:43:49.181803437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 09:43:50.218649 containerd[1530]: time="2025-09-13T09:43:50.218576877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:50.219103 containerd[1530]: time="2025-09-13T09:43:50.219052797Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 13 09:43:50.220368 containerd[1530]: time="2025-09-13T09:43:50.220307077Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:50.222985 containerd[1530]: time="2025-09-13T09:43:50.222948757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:50.224143 containerd[1530]: time="2025-09-13T09:43:50.223926757Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.04208852s" Sep 13 09:43:50.224143 containerd[1530]: time="2025-09-13T09:43:50.223962877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 13 09:43:50.224517 containerd[1530]: time="2025-09-13T09:43:50.224484637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 09:43:50.937732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 09:43:50.939297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 09:43:51.091357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 09:43:51.108966 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 09:43:51.151963 kubelet[2060]: E0913 09:43:51.151892 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 09:43:51.155720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 09:43:51.155903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 09:43:51.156298 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.1M memory peak. Sep 13 09:43:51.214291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278120491.mount: Deactivated successfully. Sep 13 09:43:51.669313 containerd[1530]: time="2025-09-13T09:43:51.669202197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:51.670248 containerd[1530]: time="2025-09-13T09:43:51.670106077Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 13 09:43:51.671103 containerd[1530]: time="2025-09-13T09:43:51.671054917Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:51.673578 containerd[1530]: time="2025-09-13T09:43:51.672965157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:51.673674 containerd[1530]: time="2025-09-13T09:43:51.673641677Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.449114s" Sep 13 09:43:51.673701 containerd[1530]: time="2025-09-13T09:43:51.673675637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 13 09:43:51.674098 containerd[1530]: time="2025-09-13T09:43:51.674069877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 09:43:52.190412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319423913.mount: Deactivated successfully. Sep 13 09:43:52.916067 containerd[1530]: time="2025-09-13T09:43:52.916010917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:52.916964 containerd[1530]: time="2025-09-13T09:43:52.916921237Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 13 09:43:52.917566 containerd[1530]: time="2025-09-13T09:43:52.917515197Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:52.920360 containerd[1530]: time="2025-09-13T09:43:52.920323317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:52.921353 containerd[1530]: time="2025-09-13T09:43:52.921317437Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.24721636s" Sep 13 09:43:52.921353 containerd[1530]: time="2025-09-13T09:43:52.921348597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 13 09:43:52.921928 containerd[1530]: time="2025-09-13T09:43:52.921907197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 09:43:53.370209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442280548.mount: Deactivated successfully. Sep 13 09:43:53.374954 containerd[1530]: time="2025-09-13T09:43:53.374898597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 09:43:53.375329 containerd[1530]: time="2025-09-13T09:43:53.375286997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 13 09:43:53.376252 containerd[1530]: time="2025-09-13T09:43:53.376228477Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 09:43:53.379173 containerd[1530]: time="2025-09-13T09:43:53.379143557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 09:43:53.379666 containerd[1530]: time="2025-09-13T09:43:53.379624517Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 457.68484ms" Sep 13 09:43:53.379666 containerd[1530]: time="2025-09-13T09:43:53.379661197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 09:43:53.380303 containerd[1530]: time="2025-09-13T09:43:53.380246037Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 09:43:53.772181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234280530.mount: Deactivated successfully. Sep 13 09:43:55.334103 containerd[1530]: time="2025-09-13T09:43:55.334055637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:55.335095 containerd[1530]: time="2025-09-13T09:43:55.335065237Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 13 09:43:55.335796 containerd[1530]: time="2025-09-13T09:43:55.335763437Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:55.339045 containerd[1530]: time="2025-09-13T09:43:55.339004437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:43:55.340648 containerd[1530]: time="2025-09-13T09:43:55.340620157Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.96034404s" Sep 13 09:43:55.340706 containerd[1530]: time="2025-09-13T09:43:55.340651277Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 13 09:44:00.464606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 09:44:00.465103 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.1M memory peak. Sep 13 09:44:00.467043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 09:44:00.484443 systemd[1]: Reload requested from client PID 2212 ('systemctl') (unit session-7.scope)... Sep 13 09:44:00.484455 systemd[1]: Reloading... Sep 13 09:44:00.556292 zram_generator::config[2255]: No configuration found. Sep 13 09:44:00.740605 systemd[1]: Reloading finished in 255 ms. Sep 13 09:44:00.772027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 09:44:00.773893 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 09:44:00.775915 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 09:44:00.776744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 09:44:00.776780 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.1M memory peak. Sep 13 09:44:00.778154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 09:44:00.904653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 09:44:00.908168 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 09:44:00.940402 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 09:44:00.940402 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 09:44:00.940402 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 09:44:00.940736 kubelet[2303]: I0913 09:44:00.940480 2303 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 09:44:02.111630 kubelet[2303]: I0913 09:44:02.111513 2303 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 09:44:02.111630 kubelet[2303]: I0913 09:44:02.111540 2303 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 09:44:02.111988 kubelet[2303]: I0913 09:44:02.111947 2303 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 09:44:02.130709 kubelet[2303]: E0913 09:44:02.130676 2303 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 09:44:02.131234 kubelet[2303]: I0913 09:44:02.131190 2303 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 09:44:02.137890 kubelet[2303]: I0913 09:44:02.137869 2303 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 09:44:02.140444 kubelet[2303]: I0913 09:44:02.140428 2303 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 09:44:02.141434 kubelet[2303]: I0913 09:44:02.141391 2303 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 09:44:02.141583 kubelet[2303]: I0913 09:44:02.141425 2303 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUMana
gerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 09:44:02.141671 kubelet[2303]: I0913 09:44:02.141650 2303 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 09:44:02.141671 kubelet[2303]: I0913 09:44:02.141659 2303 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 09:44:02.142364 kubelet[2303]: I0913 09:44:02.142330 2303 state_mem.go:36] "Initialized new in-memory state store" Sep 13 09:44:02.144709 kubelet[2303]: I0913 09:44:02.144686 2303 kubelet.go:480] "Attempting to sync node with API server" Sep 13 09:44:02.144736 kubelet[2303]: I0913 09:44:02.144711 2303 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 09:44:02.144736 kubelet[2303]: I0913 09:44:02.144736 2303 kubelet.go:386] "Adding apiserver pod source" Sep 13 09:44:02.144768 kubelet[2303]: I0913 09:44:02.144747 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 09:44:02.145627 kubelet[2303]: I0913 09:44:02.145595 2303 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 09:44:02.146320 kubelet[2303]: I0913 09:44:02.146282 2303 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 09:44:02.146451 kubelet[2303]: W0913 09:44:02.146429 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 09:44:02.147080 kubelet[2303]: E0913 09:44:02.147039 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 09:44:02.149259 kubelet[2303]: I0913 09:44:02.149226 2303 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 09:44:02.149315 kubelet[2303]: I0913 09:44:02.149273 2303 server.go:1289] "Started kubelet" Sep 13 09:44:02.149737 kubelet[2303]: E0913 09:44:02.149703 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 09:44:02.149776 kubelet[2303]: I0913 09:44:02.149750 2303 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 09:44:02.152512 kubelet[2303]: I0913 09:44:02.152325 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 09:44:02.152665 kubelet[2303]: I0913 09:44:02.152646 2303 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 09:44:02.153374 kubelet[2303]: E0913 09:44:02.152340 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864ce5a60e39edd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 09:44:02.149244637 +0000 UTC m=+1.237907561,LastTimestamp:2025-09-13 09:44:02.149244637 +0000 UTC m=+1.237907561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 09:44:02.154107 kubelet[2303]: I0913 09:44:02.154084 2303 server.go:317] "Adding debug handlers to kubelet server" Sep 13 09:44:02.154856 kubelet[2303]: I0913 09:44:02.154838 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 09:44:02.155246 kubelet[2303]: I0913 09:44:02.155218 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 09:44:02.155979 kubelet[2303]: I0913 09:44:02.155951 2303 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 09:44:02.156076 kubelet[2303]: E0913 09:44:02.156060 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 09:44:02.156301 kubelet[2303]: I0913 09:44:02.156276 2303 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 09:44:02.156401 kubelet[2303]: I0913 09:44:02.156371 2303 reconciler.go:26] "Reconciler: start to sync state" Sep 13 09:44:02.157443 kubelet[2303]: E0913 09:44:02.157253 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 09:44:02.157443 kubelet[2303]: E0913 09:44:02.157264 2303 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="200ms" Sep 13 09:44:02.157443 kubelet[2303]: E0913 09:44:02.157376 2303 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 09:44:02.159071 kubelet[2303]: I0913 09:44:02.159053 2303 factory.go:223] Registration of the systemd container factory successfully Sep 13 09:44:02.159324 kubelet[2303]: I0913 09:44:02.159305 2303 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 09:44:02.160182 kubelet[2303]: I0913 09:44:02.160162 2303 factory.go:223] Registration of the containerd container factory successfully Sep 13 09:44:02.168696 kubelet[2303]: I0913 09:44:02.168385 2303 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 09:44:02.168696 kubelet[2303]: I0913 09:44:02.168399 2303 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 09:44:02.168696 kubelet[2303]: I0913 09:44:02.168414 2303 state_mem.go:36] "Initialized new in-memory state store" Sep 13 09:44:02.170789 kubelet[2303]: I0913 09:44:02.170752 2303 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 09:44:02.171741 kubelet[2303]: I0913 09:44:02.171724 2303 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 09:44:02.171817 kubelet[2303]: I0913 09:44:02.171808 2303 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 09:44:02.171891 kubelet[2303]: I0913 09:44:02.171871 2303 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 09:44:02.171933 kubelet[2303]: I0913 09:44:02.171926 2303 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 09:44:02.172013 kubelet[2303]: E0913 09:44:02.171998 2303 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 09:44:02.254512 kubelet[2303]: I0913 09:44:02.254473 2303 policy_none.go:49] "None policy: Start" Sep 13 09:44:02.254512 kubelet[2303]: E0913 09:44:02.254482 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 09:44:02.254512 kubelet[2303]: I0913 09:44:02.254511 2303 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 09:44:02.254669 kubelet[2303]: I0913 09:44:02.254527 2303 state_mem.go:35] "Initializing new in-memory state store" Sep 13 09:44:02.256145 kubelet[2303]: E0913 09:44:02.256124 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 09:44:02.259686 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 09:44:02.272305 kubelet[2303]: E0913 09:44:02.272267 2303 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 09:44:02.275809 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 09:44:02.278329 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 09:44:02.294305 kubelet[2303]: E0913 09:44:02.294250 2303 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 09:44:02.295077 kubelet[2303]: I0913 09:44:02.294448 2303 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 09:44:02.295077 kubelet[2303]: I0913 09:44:02.294463 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 09:44:02.295077 kubelet[2303]: I0913 09:44:02.294683 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 09:44:02.296032 kubelet[2303]: E0913 09:44:02.296009 2303 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 09:44:02.296032 kubelet[2303]: E0913 09:44:02.296047 2303 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 09:44:02.357787 kubelet[2303]: E0913 09:44:02.357761 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="400ms" Sep 13 09:44:02.396996 kubelet[2303]: I0913 09:44:02.396676 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 09:44:02.397153 kubelet[2303]: E0913 09:44:02.397128 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Sep 13 09:44:02.483416 systemd[1]: Created slice kubepods-burstable-podbd044250ce226706855c17fa7a0c2884.slice - libcontainer container kubepods-burstable-podbd044250ce226706855c17fa7a0c2884.slice. 
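The repeated "Failed to ensure lease exists, will retry" entries show the retry interval doubling: 200ms, then 400ms here, then 800ms further down, while the apiserver at 10.0.0.32:6443 is still refusing connections. A minimal sketch of that doubling pattern; the initial interval and factor are read off the log, but any cap or jitter in the kubelet's actual backoff policy is not visible in this excerpt and is not modeled:

```python
def lease_retry_intervals(initial_ms: int = 200, factor: int = 2, attempts: int = 4) -> list[int]:
    """Doubling retry intervals as observed in the lease-controller entries."""
    out, cur = [], initial_ms
    for _ in range(attempts):
        out.append(cur)
        cur *= factor
    return out

print(lease_retry_intervals())  # [200, 400, 800, 1600]
```

The first three values line up with the interval="200ms", "400ms", and "800ms" entries in this log.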
Sep 13 09:44:02.502300 kubelet[2303]: E0913 09:44:02.502219 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 09:44:02.504219 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 13 09:44:02.505956 kubelet[2303]: E0913 09:44:02.505937 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 09:44:02.508005 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 13 09:44:02.509328 kubelet[2303]: E0913 09:44:02.509303 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 09:44:02.558623 kubelet[2303]: I0913 09:44:02.558603 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd044250ce226706855c17fa7a0c2884-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bd044250ce226706855c17fa7a0c2884\") " pod="kube-system/kube-apiserver-localhost" Sep 13 09:44:02.558623 kubelet[2303]: I0913 09:44:02.558630 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 09:44:02.558802 kubelet[2303]: I0913 09:44:02.558658 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 09:44:02.558802 kubelet[2303]: I0913 09:44:02.558675 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 09:44:02.558802 kubelet[2303]: I0913 09:44:02.558697 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 09:44:02.558802 kubelet[2303]: I0913 09:44:02.558713 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 09:44:02.558802 kubelet[2303]: I0913 09:44:02.558745 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd044250ce226706855c17fa7a0c2884-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bd044250ce226706855c17fa7a0c2884\") " pod="kube-system/kube-apiserver-localhost" Sep 13 09:44:02.558940 kubelet[2303]: I0913 09:44:02.558768 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd044250ce226706855c17fa7a0c2884-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bd044250ce226706855c17fa7a0c2884\") " pod="kube-system/kube-apiserver-localhost" Sep 13 09:44:02.558940 kubelet[2303]: I0913 09:44:02.558784 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 09:44:02.598643 kubelet[2303]: I0913 09:44:02.598617 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 09:44:02.599015 kubelet[2303]: E0913 09:44:02.598985 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Sep 13 09:44:02.758431 kubelet[2303]: E0913 09:44:02.758341 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="800ms" Sep 13 09:44:02.804397 containerd[1530]: time="2025-09-13T09:44:02.804351357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bd044250ce226706855c17fa7a0c2884,Namespace:kube-system,Attempt:0,}" Sep 13 09:44:02.807113 containerd[1530]: time="2025-09-13T09:44:02.806900077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 13 09:44:02.811210 containerd[1530]: time="2025-09-13T09:44:02.811182917Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 13 09:44:02.830868 containerd[1530]: time="2025-09-13T09:44:02.830752437Z" level=info msg="connecting to shim 50f1871d03a4a91e03cb48e31610aedcc2c6f9708e080e0a46ab137139084311" address="unix:///run/containerd/s/a3da7ffae04c0b909d9c156e51f50b8d3283dfc2cb0575afbddac342942c11eb" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:02.836666 containerd[1530]: time="2025-09-13T09:44:02.836530397Z" level=info msg="connecting to shim f2aae8d8d18cb37bba7de1e52b69d879bd98bd666f532fab77df3f104fd6e5b6" address="unix:///run/containerd/s/88226a65f7fd0285f4c5e2058c9b428afe0d151e2f0f17450ca9cb7f65a0a694" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:02.841594 containerd[1530]: time="2025-09-13T09:44:02.841532677Z" level=info msg="connecting to shim d513978f0760cbba3d1b6ba344649cd245986c602b4d2054284f3aaa488c32a2" address="unix:///run/containerd/s/8704957c04c3bb82cd42f22e862ed1480ba9af01163c57661b2da9b59c5c7c18" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:02.864713 systemd[1]: Started cri-containerd-50f1871d03a4a91e03cb48e31610aedcc2c6f9708e080e0a46ab137139084311.scope - libcontainer container 50f1871d03a4a91e03cb48e31610aedcc2c6f9708e080e0a46ab137139084311. Sep 13 09:44:02.868064 systemd[1]: Started cri-containerd-d513978f0760cbba3d1b6ba344649cd245986c602b4d2054284f3aaa488c32a2.scope - libcontainer container d513978f0760cbba3d1b6ba344649cd245986c602b4d2054284f3aaa488c32a2. Sep 13 09:44:02.869591 systemd[1]: Started cri-containerd-f2aae8d8d18cb37bba7de1e52b69d879bd98bd666f532fab77df3f104fd6e5b6.scope - libcontainer container f2aae8d8d18cb37bba7de1e52b69d879bd98bd666f532fab77df3f104fd6e5b6. 
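Each sandbox above produces a "connecting to shim" entry pairing a shim id with its ttrpc socket under /run/containerd/s/. A small sketch for correlating the two when triaging such logs; the sample entries are shortened copies of the ones above and the helper name is illustrative:

```python
import re

# Shortened copies of the containerd "connecting to shim" entries above.
entries = [
    'msg="connecting to shim 50f1871d03a4" address="unix:///run/containerd/s/a3da7ffae04c" protocol=ttrpc',
    'msg="connecting to shim d513978f0760" address="unix:///run/containerd/s/8704957c04c3" protocol=ttrpc',
]

def shim_sockets(log_entries: list[str]) -> dict[str, str]:
    """Map each shim id to the unix socket address it was dialed on."""
    out = {}
    for e in log_entries:
        m = re.search(r'connecting to shim (\S+)" address="(\S+)"', e)
        if m:
            out[m.group(1)] = m.group(2)
    return out

print(shim_sockets(entries))
```

Matching the socket path back to the later "connecting to shim &lt;container-id&gt;" entries is how one can tell which StartContainer call landed in which sandbox.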
Sep 13 09:44:02.908114 containerd[1530]: time="2025-09-13T09:44:02.908066157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d513978f0760cbba3d1b6ba344649cd245986c602b4d2054284f3aaa488c32a2\"" Sep 13 09:44:02.911179 containerd[1530]: time="2025-09-13T09:44:02.911147517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bd044250ce226706855c17fa7a0c2884,Namespace:kube-system,Attempt:0,} returns sandbox id \"50f1871d03a4a91e03cb48e31610aedcc2c6f9708e080e0a46ab137139084311\"" Sep 13 09:44:02.912606 containerd[1530]: time="2025-09-13T09:44:02.912578237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2aae8d8d18cb37bba7de1e52b69d879bd98bd666f532fab77df3f104fd6e5b6\"" Sep 13 09:44:02.914128 containerd[1530]: time="2025-09-13T09:44:02.914099917Z" level=info msg="CreateContainer within sandbox \"d513978f0760cbba3d1b6ba344649cd245986c602b4d2054284f3aaa488c32a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 09:44:02.915419 containerd[1530]: time="2025-09-13T09:44:02.915277877Z" level=info msg="CreateContainer within sandbox \"50f1871d03a4a91e03cb48e31610aedcc2c6f9708e080e0a46ab137139084311\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 09:44:02.916941 containerd[1530]: time="2025-09-13T09:44:02.916907357Z" level=info msg="CreateContainer within sandbox \"f2aae8d8d18cb37bba7de1e52b69d879bd98bd666f532fab77df3f104fd6e5b6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 09:44:02.922934 containerd[1530]: time="2025-09-13T09:44:02.922905597Z" level=info msg="Container d7253ef65d9f051d54aaa78cf378ab89e6b947a2d62c9b59c953520604260f4c: CDI devices from CRI Config.CDIDevices: []" Sep 13 
09:44:02.926217 containerd[1530]: time="2025-09-13T09:44:02.926176597Z" level=info msg="Container 5e3ad7ab2734784863f1a944f204a016279ccdb7f1f6060fb46373247a9e6344: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:02.928165 containerd[1530]: time="2025-09-13T09:44:02.928134037Z" level=info msg="Container 569437ea7cd353d7d8e2a7a69ee3ef314bd20c36e3348cf0f8aab384f34009d5: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:02.931923 containerd[1530]: time="2025-09-13T09:44:02.931892157Z" level=info msg="CreateContainer within sandbox \"d513978f0760cbba3d1b6ba344649cd245986c602b4d2054284f3aaa488c32a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d7253ef65d9f051d54aaa78cf378ab89e6b947a2d62c9b59c953520604260f4c\"" Sep 13 09:44:02.933612 containerd[1530]: time="2025-09-13T09:44:02.932488437Z" level=info msg="StartContainer for \"d7253ef65d9f051d54aaa78cf378ab89e6b947a2d62c9b59c953520604260f4c\"" Sep 13 09:44:02.933612 containerd[1530]: time="2025-09-13T09:44:02.933412237Z" level=info msg="CreateContainer within sandbox \"50f1871d03a4a91e03cb48e31610aedcc2c6f9708e080e0a46ab137139084311\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e3ad7ab2734784863f1a944f204a016279ccdb7f1f6060fb46373247a9e6344\"" Sep 13 09:44:02.933612 containerd[1530]: time="2025-09-13T09:44:02.933453757Z" level=info msg="connecting to shim d7253ef65d9f051d54aaa78cf378ab89e6b947a2d62c9b59c953520604260f4c" address="unix:///run/containerd/s/8704957c04c3bb82cd42f22e862ed1480ba9af01163c57661b2da9b59c5c7c18" protocol=ttrpc version=3 Sep 13 09:44:02.933781 containerd[1530]: time="2025-09-13T09:44:02.933756117Z" level=info msg="StartContainer for \"5e3ad7ab2734784863f1a944f204a016279ccdb7f1f6060fb46373247a9e6344\"" Sep 13 09:44:02.935168 containerd[1530]: time="2025-09-13T09:44:02.935143197Z" level=info msg="connecting to shim 5e3ad7ab2734784863f1a944f204a016279ccdb7f1f6060fb46373247a9e6344" 
address="unix:///run/containerd/s/a3da7ffae04c0b909d9c156e51f50b8d3283dfc2cb0575afbddac342942c11eb" protocol=ttrpc version=3 Sep 13 09:44:02.937017 containerd[1530]: time="2025-09-13T09:44:02.936712237Z" level=info msg="CreateContainer within sandbox \"f2aae8d8d18cb37bba7de1e52b69d879bd98bd666f532fab77df3f104fd6e5b6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"569437ea7cd353d7d8e2a7a69ee3ef314bd20c36e3348cf0f8aab384f34009d5\"" Sep 13 09:44:02.937226 containerd[1530]: time="2025-09-13T09:44:02.937193797Z" level=info msg="StartContainer for \"569437ea7cd353d7d8e2a7a69ee3ef314bd20c36e3348cf0f8aab384f34009d5\"" Sep 13 09:44:02.938181 containerd[1530]: time="2025-09-13T09:44:02.938153277Z" level=info msg="connecting to shim 569437ea7cd353d7d8e2a7a69ee3ef314bd20c36e3348cf0f8aab384f34009d5" address="unix:///run/containerd/s/88226a65f7fd0285f4c5e2058c9b428afe0d151e2f0f17450ca9cb7f65a0a694" protocol=ttrpc version=3 Sep 13 09:44:02.954694 systemd[1]: Started cri-containerd-5e3ad7ab2734784863f1a944f204a016279ccdb7f1f6060fb46373247a9e6344.scope - libcontainer container 5e3ad7ab2734784863f1a944f204a016279ccdb7f1f6060fb46373247a9e6344. Sep 13 09:44:02.955802 systemd[1]: Started cri-containerd-d7253ef65d9f051d54aaa78cf378ab89e6b947a2d62c9b59c953520604260f4c.scope - libcontainer container d7253ef65d9f051d54aaa78cf378ab89e6b947a2d62c9b59c953520604260f4c. Sep 13 09:44:02.959133 systemd[1]: Started cri-containerd-569437ea7cd353d7d8e2a7a69ee3ef314bd20c36e3348cf0f8aab384f34009d5.scope - libcontainer container 569437ea7cd353d7d8e2a7a69ee3ef314bd20c36e3348cf0f8aab384f34009d5. 
Sep 13 09:44:03.001325 kubelet[2303]: I0913 09:44:03.001295 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 09:44:03.003097 kubelet[2303]: E0913 09:44:03.002996 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Sep 13 09:44:03.005813 containerd[1530]: time="2025-09-13T09:44:03.005775797Z" level=info msg="StartContainer for \"d7253ef65d9f051d54aaa78cf378ab89e6b947a2d62c9b59c953520604260f4c\" returns successfully" Sep 13 09:44:03.006625 containerd[1530]: time="2025-09-13T09:44:03.006475997Z" level=info msg="StartContainer for \"569437ea7cd353d7d8e2a7a69ee3ef314bd20c36e3348cf0f8aab384f34009d5\" returns successfully" Sep 13 09:44:03.007305 containerd[1530]: time="2025-09-13T09:44:03.006993477Z" level=info msg="StartContainer for \"5e3ad7ab2734784863f1a944f204a016279ccdb7f1f6060fb46373247a9e6344\" returns successfully" Sep 13 09:44:03.181989 kubelet[2303]: E0913 09:44:03.181896 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 09:44:03.186778 kubelet[2303]: E0913 09:44:03.186648 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 09:44:03.187069 kubelet[2303]: E0913 09:44:03.187054 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 09:44:03.804756 kubelet[2303]: I0913 09:44:03.804730 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 09:44:04.189743 kubelet[2303]: E0913 09:44:04.188801 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Sep 13 09:44:04.189743 kubelet[2303]: E0913 09:44:04.189577 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 09:44:04.190296 kubelet[2303]: E0913 09:44:04.190150 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 09:44:04.907712 kubelet[2303]: E0913 09:44:04.907662 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 09:44:04.985571 kubelet[2303]: I0913 09:44:04.985517 2303 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 09:44:05.056981 kubelet[2303]: I0913 09:44:05.056932 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 09:44:05.064542 kubelet[2303]: E0913 09:44:05.064514 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 09:44:05.064542 kubelet[2303]: I0913 09:44:05.064538 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 09:44:05.066106 kubelet[2303]: E0913 09:44:05.066077 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 09:44:05.066169 kubelet[2303]: I0913 09:44:05.066113 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 09:44:05.068056 kubelet[2303]: E0913 09:44:05.068016 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 09:44:05.148473 kubelet[2303]: I0913 09:44:05.148441 2303 apiserver.go:52] "Watching apiserver" Sep 13 09:44:05.156986 kubelet[2303]: I0913 09:44:05.156949 2303 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 09:44:05.188517 kubelet[2303]: I0913 09:44:05.188441 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 09:44:05.190602 kubelet[2303]: E0913 09:44:05.190577 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 09:44:06.846853 systemd[1]: Reload requested from client PID 2583 ('systemctl') (unit session-7.scope)... Sep 13 09:44:06.846867 systemd[1]: Reloading... Sep 13 09:44:06.904582 zram_generator::config[2626]: No configuration found. Sep 13 09:44:07.145785 systemd[1]: Reloading finished in 298 ms. Sep 13 09:44:07.167409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 09:44:07.179788 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 09:44:07.180642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 09:44:07.180705 systemd[1]: kubelet.service: Consumed 1.598s CPU time, 127.8M memory peak. Sep 13 09:44:07.182291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 09:44:07.302765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
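Nearly every error in the first half of this log is the same "dial tcp 10.0.0.32:6443: connect: connection refused", which clears once the static apiserver pod comes up. A hedged triage sketch that tallies refused connections by dialed address, useful for confirming they all target the not-yet-started apiserver rather than several endpoints; the sample text is an abbreviated stand-in for the journal excerpt above:

```python
import re

# Abbreviated stand-in for the kubelet journal excerpt above.
sample = '''
kubelet[2303]: E0913 ... dial tcp 10.0.0.32:6443: connect: connection refused
kubelet[2303]: E0913 ... dial tcp 10.0.0.32:6443: connect: connection refused
'''

def count_refused(text: str) -> dict[str, int]:
    """Count 'connection refused' dial errors per target address."""
    counts: dict[str, int] = {}
    for addr in re.findall(r"dial tcp ([\d.]+:\d+): connect: connection refused", text):
        counts[addr] = counts.get(addr, 0) + 1
    return counts

print(count_refused(sample))  # {'10.0.0.32:6443': 2}
```

A single address dominating the tally, as here, points at one dependency (the apiserver) rather than a general networking fault.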
Sep 13 09:44:07.307903 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 09:44:07.346019 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 09:44:07.346019 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 09:44:07.346019 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 09:44:07.347028 kubelet[2668]: I0913 09:44:07.346737 2668 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 09:44:07.352440 kubelet[2668]: I0913 09:44:07.352397 2668 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 09:44:07.352440 kubelet[2668]: I0913 09:44:07.352424 2668 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 09:44:07.352664 kubelet[2668]: I0913 09:44:07.352648 2668 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 09:44:07.353873 kubelet[2668]: I0913 09:44:07.353844 2668 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 09:44:07.356102 kubelet[2668]: I0913 09:44:07.356080 2668 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 09:44:07.360213 kubelet[2668]: I0913 09:44:07.360193 2668 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd"
Sep 13 09:44:07.362850 kubelet[2668]: I0913 09:44:07.362826 2668 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 09:44:07.363145 kubelet[2668]: I0913 09:44:07.363116 2668 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 09:44:07.363370 kubelet[2668]: I0913 09:44:07.363206 2668 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 09:44:07.363497 kubelet[2668]: I0913 09:44:07.363483 2668 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 09:44:07.363567 kubelet[2668]: I0913 09:44:07.363542 2668 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 09:44:07.363664 kubelet[2668]: I0913 09:44:07.363652 2668 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 09:44:07.363882 kubelet[2668]: I0913 09:44:07.363866 2668 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 09:44:07.363973 kubelet[2668]: I0913 09:44:07.363958 2668 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 09:44:07.364058 kubelet[2668]: I0913 09:44:07.364048 2668 kubelet.go:386] "Adding apiserver pod source"
Sep 13 09:44:07.364115 kubelet[2668]: I0913 09:44:07.364106 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 09:44:07.365694 kubelet[2668]: I0913 09:44:07.365671 2668 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 13 09:44:07.366791 kubelet[2668]: I0913 09:44:07.366768 2668 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 09:44:07.369062 kubelet[2668]: I0913 09:44:07.369044 2668 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 09:44:07.369186 kubelet[2668]: I0913 09:44:07.369175 2668 server.go:1289] "Started kubelet"
Sep 13 09:44:07.369280 kubelet[2668]: I0913 09:44:07.369228 2668 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 09:44:07.369398 kubelet[2668]: I0913 09:44:07.369348 2668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 09:44:07.369787 kubelet[2668]: I0913 09:44:07.369766 2668 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 09:44:07.370177 kubelet[2668]: I0913 09:44:07.370141 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 09:44:07.370665 kubelet[2668]: I0913 09:44:07.370648 2668 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 09:44:07.371814 kubelet[2668]: I0913 09:44:07.371784 2668 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 09:44:07.372533 kubelet[2668]: I0913 09:44:07.372363 2668 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 09:44:07.372608 kubelet[2668]: I0913 09:44:07.372575 2668 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 09:44:07.372651 kubelet[2668]: E0913 09:44:07.372142 2668 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 09:44:07.372724 kubelet[2668]: I0913 09:44:07.372705 2668 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 09:44:07.375812 kubelet[2668]: I0913 09:44:07.374121 2668 factory.go:223] Registration of the systemd container factory successfully
Sep 13 09:44:07.375812 kubelet[2668]: I0913 09:44:07.374214 2668 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 09:44:07.389946 kubelet[2668]: I0913 09:44:07.389903 2668 factory.go:223] Registration of the containerd container factory successfully
Sep 13 09:44:07.390704 kubelet[2668]: I0913 09:44:07.390675 2668 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 09:44:07.398608 kubelet[2668]: I0913 09:44:07.398489 2668 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 09:44:07.398608 kubelet[2668]: I0913 09:44:07.398516 2668 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 09:44:07.398608 kubelet[2668]: I0913 09:44:07.398535 2668 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 09:44:07.398608 kubelet[2668]: I0913 09:44:07.398541 2668 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 09:44:07.399422 kubelet[2668]: E0913 09:44:07.399371 2668 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 09:44:07.427260 kubelet[2668]: I0913 09:44:07.427080 2668 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 09:44:07.427260 kubelet[2668]: I0913 09:44:07.427100 2668 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 09:44:07.427260 kubelet[2668]: I0913 09:44:07.427121 2668 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 09:44:07.427260 kubelet[2668]: I0913 09:44:07.427236 2668 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 09:44:07.427260 kubelet[2668]: I0913 09:44:07.427245 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 09:44:07.427260 kubelet[2668]: I0913 09:44:07.427261 2668 policy_none.go:49] "None policy: Start"
Sep 13 09:44:07.427260 kubelet[2668]: I0913 09:44:07.427280 2668 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 09:44:07.427584 kubelet[2668]: I0913 09:44:07.427291 2668 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 09:44:07.427584 kubelet[2668]: I0913 09:44:07.427367 2668 state_mem.go:75] "Updated machine memory state"
Sep 13 09:44:07.431035 kubelet[2668]: E0913 09:44:07.431015 2668 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 09:44:07.431192 kubelet[2668]: I0913 09:44:07.431175 2668 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 09:44:07.431219 kubelet[2668]: I0913 09:44:07.431192 2668 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 09:44:07.432017 kubelet[2668]: E0913 09:44:07.431985 2668 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 09:44:07.432207 kubelet[2668]: I0913 09:44:07.432064 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 09:44:07.500463 kubelet[2668]: I0913 09:44:07.500423 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 09:44:07.500708 kubelet[2668]: I0913 09:44:07.500684 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 09:44:07.500822 kubelet[2668]: I0913 09:44:07.500806 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 09:44:07.532518 kubelet[2668]: I0913 09:44:07.532492 2668 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 09:44:07.538934 kubelet[2668]: I0913 09:44:07.538898 2668 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 13 09:44:07.538999 kubelet[2668]: I0913 09:44:07.538980 2668 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 13 09:44:07.574573 kubelet[2668]: I0913 09:44:07.574367 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 09:44:07.574573 kubelet[2668]: I0913 09:44:07.574407 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd044250ce226706855c17fa7a0c2884-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bd044250ce226706855c17fa7a0c2884\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 09:44:07.574573 kubelet[2668]: I0913 09:44:07.574427 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd044250ce226706855c17fa7a0c2884-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bd044250ce226706855c17fa7a0c2884\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 09:44:07.574573 kubelet[2668]: I0913 09:44:07.574444 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 09:44:07.574573 kubelet[2668]: I0913 09:44:07.574463 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 09:44:07.574757 kubelet[2668]: I0913 09:44:07.574480 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 09:44:07.574757 kubelet[2668]: I0913 09:44:07.574498 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 09:44:07.574757 kubelet[2668]: I0913 09:44:07.574514 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd044250ce226706855c17fa7a0c2884-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bd044250ce226706855c17fa7a0c2884\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 09:44:07.574757 kubelet[2668]: I0913 09:44:07.574530 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 09:44:08.364467 kubelet[2668]: I0913 09:44:08.364412 2668 apiserver.go:52] "Watching apiserver"
Sep 13 09:44:08.373569 kubelet[2668]: I0913 09:44:08.373343 2668 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 09:44:08.418163 kubelet[2668]: I0913 09:44:08.417861 2668 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 09:44:08.424371 kubelet[2668]: E0913 09:44:08.424199 2668 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 09:44:08.435739 kubelet[2668]: I0913 09:44:08.435691 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.435652757 podStartE2EDuration="1.435652757s" podCreationTimestamp="2025-09-13 09:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 09:44:08.434991117 +0000 UTC m=+1.122622121" watchObservedRunningTime="2025-09-13 09:44:08.435652757 +0000 UTC m=+1.123283761"
Sep 13 09:44:08.443356 kubelet[2668]: I0913 09:44:08.442990 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.442978517 podStartE2EDuration="1.442978517s" podCreationTimestamp="2025-09-13 09:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 09:44:08.442342957 +0000 UTC m=+1.129973961" watchObservedRunningTime="2025-09-13 09:44:08.442978517 +0000 UTC m=+1.130609521"
Sep 13 09:44:08.460572 kubelet[2668]: I0913 09:44:08.460290 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.460276597 podStartE2EDuration="1.460276597s" podCreationTimestamp="2025-09-13 09:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 09:44:08.451337437 +0000 UTC m=+1.138968481" watchObservedRunningTime="2025-09-13 09:44:08.460276597 +0000 UTC m=+1.147907601"
Sep 13 09:44:12.768731 kubelet[2668]: I0913 09:44:12.768700 2668 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 09:44:12.769422 containerd[1530]: time="2025-09-13T09:44:12.769313074Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 09:44:12.769714 kubelet[2668]: I0913 09:44:12.769473 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 09:44:13.616916 systemd[1]: Created slice kubepods-besteffort-pod31b40fc2_b99a_4b8b_ab71_e0bab608b381.slice - libcontainer container kubepods-besteffort-pod31b40fc2_b99a_4b8b_ab71_e0bab608b381.slice.
Sep 13 09:44:13.716595 kubelet[2668]: I0913 09:44:13.716567 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31b40fc2-b99a-4b8b-ab71-e0bab608b381-kube-proxy\") pod \"kube-proxy-44p59\" (UID: \"31b40fc2-b99a-4b8b-ab71-e0bab608b381\") " pod="kube-system/kube-proxy-44p59"
Sep 13 09:44:13.716699 kubelet[2668]: I0913 09:44:13.716599 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31b40fc2-b99a-4b8b-ab71-e0bab608b381-xtables-lock\") pod \"kube-proxy-44p59\" (UID: \"31b40fc2-b99a-4b8b-ab71-e0bab608b381\") " pod="kube-system/kube-proxy-44p59"
Sep 13 09:44:13.716699 kubelet[2668]: I0913 09:44:13.716620 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31b40fc2-b99a-4b8b-ab71-e0bab608b381-lib-modules\") pod \"kube-proxy-44p59\" (UID: \"31b40fc2-b99a-4b8b-ab71-e0bab608b381\") " pod="kube-system/kube-proxy-44p59"
Sep 13 09:44:13.716699 kubelet[2668]: I0913 09:44:13.716646 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrpl\" (UniqueName: \"kubernetes.io/projected/31b40fc2-b99a-4b8b-ab71-e0bab608b381-kube-api-access-fsrpl\") pod \"kube-proxy-44p59\" (UID: \"31b40fc2-b99a-4b8b-ab71-e0bab608b381\") " pod="kube-system/kube-proxy-44p59"
Sep 13 09:44:13.780179 systemd[1]: Created slice kubepods-besteffort-pod1bee8eec_2bc9_4bb9_84e5_94418bcac5b0.slice - libcontainer container kubepods-besteffort-pod1bee8eec_2bc9_4bb9_84e5_94418bcac5b0.slice.
Sep 13 09:44:13.817085 kubelet[2668]: I0913 09:44:13.817023 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1bee8eec-2bc9-4bb9-84e5-94418bcac5b0-var-lib-calico\") pod \"tigera-operator-755d956888-bcjfk\" (UID: \"1bee8eec-2bc9-4bb9-84e5-94418bcac5b0\") " pod="tigera-operator/tigera-operator-755d956888-bcjfk"
Sep 13 09:44:13.817420 kubelet[2668]: I0913 09:44:13.817098 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlv6\" (UniqueName: \"kubernetes.io/projected/1bee8eec-2bc9-4bb9-84e5-94418bcac5b0-kube-api-access-bvlv6\") pod \"tigera-operator-755d956888-bcjfk\" (UID: \"1bee8eec-2bc9-4bb9-84e5-94418bcac5b0\") " pod="tigera-operator/tigera-operator-755d956888-bcjfk"
Sep 13 09:44:13.928582 containerd[1530]: time="2025-09-13T09:44:13.928471689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44p59,Uid:31b40fc2-b99a-4b8b-ab71-e0bab608b381,Namespace:kube-system,Attempt:0,}"
Sep 13 09:44:13.948697 containerd[1530]: time="2025-09-13T09:44:13.948642765Z" level=info msg="connecting to shim fc8d62f7fc3ae28d519082af28da52846d8a7130f07175106b1c2c069c808114" address="unix:///run/containerd/s/226ad4258d079ef83af5ca5a8dd42324e46b3d5adb9638606b9afe5728f3fbc2" namespace=k8s.io protocol=ttrpc version=3
Sep 13 09:44:13.968686 systemd[1]: Started cri-containerd-fc8d62f7fc3ae28d519082af28da52846d8a7130f07175106b1c2c069c808114.scope - libcontainer container fc8d62f7fc3ae28d519082af28da52846d8a7130f07175106b1c2c069c808114.
Sep 13 09:44:13.989043 containerd[1530]: time="2025-09-13T09:44:13.989004117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44p59,Uid:31b40fc2-b99a-4b8b-ab71-e0bab608b381,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc8d62f7fc3ae28d519082af28da52846d8a7130f07175106b1c2c069c808114\""
Sep 13 09:44:13.994048 containerd[1530]: time="2025-09-13T09:44:13.993707055Z" level=info msg="CreateContainer within sandbox \"fc8d62f7fc3ae28d519082af28da52846d8a7130f07175106b1c2c069c808114\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 09:44:14.005188 containerd[1530]: time="2025-09-13T09:44:14.005158657Z" level=info msg="Container 19f7b8203b97997a5b7dcaf6240d3d0bbab35c3ba01a6b0351d09a21fe9aa92c: CDI devices from CRI Config.CDIDevices: []"
Sep 13 09:44:14.011391 containerd[1530]: time="2025-09-13T09:44:14.011350278Z" level=info msg="CreateContainer within sandbox \"fc8d62f7fc3ae28d519082af28da52846d8a7130f07175106b1c2c069c808114\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"19f7b8203b97997a5b7dcaf6240d3d0bbab35c3ba01a6b0351d09a21fe9aa92c\""
Sep 13 09:44:14.012042 containerd[1530]: time="2025-09-13T09:44:14.011968561Z" level=info msg="StartContainer for \"19f7b8203b97997a5b7dcaf6240d3d0bbab35c3ba01a6b0351d09a21fe9aa92c\""
Sep 13 09:44:14.013251 containerd[1530]: time="2025-09-13T09:44:14.013211605Z" level=info msg="connecting to shim 19f7b8203b97997a5b7dcaf6240d3d0bbab35c3ba01a6b0351d09a21fe9aa92c" address="unix:///run/containerd/s/226ad4258d079ef83af5ca5a8dd42324e46b3d5adb9638606b9afe5728f3fbc2" protocol=ttrpc version=3
Sep 13 09:44:14.041703 systemd[1]: Started cri-containerd-19f7b8203b97997a5b7dcaf6240d3d0bbab35c3ba01a6b0351d09a21fe9aa92c.scope - libcontainer container 19f7b8203b97997a5b7dcaf6240d3d0bbab35c3ba01a6b0351d09a21fe9aa92c.
Sep 13 09:44:14.071597 containerd[1530]: time="2025-09-13T09:44:14.071484090Z" level=info msg="StartContainer for \"19f7b8203b97997a5b7dcaf6240d3d0bbab35c3ba01a6b0351d09a21fe9aa92c\" returns successfully"
Sep 13 09:44:14.083145 containerd[1530]: time="2025-09-13T09:44:14.083111651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bcjfk,Uid:1bee8eec-2bc9-4bb9-84e5-94418bcac5b0,Namespace:tigera-operator,Attempt:0,}"
Sep 13 09:44:14.098910 containerd[1530]: time="2025-09-13T09:44:14.098820467Z" level=info msg="connecting to shim e435346981bfede9ed499a29873a81caad2d6c06076facf4d06c4b08df45145a" address="unix:///run/containerd/s/237051a4b5beaf19077c230e19bf86e2ac12483f53746233cd82176c80b073bb" namespace=k8s.io protocol=ttrpc version=3
Sep 13 09:44:14.128702 systemd[1]: Started cri-containerd-e435346981bfede9ed499a29873a81caad2d6c06076facf4d06c4b08df45145a.scope - libcontainer container e435346981bfede9ed499a29873a81caad2d6c06076facf4d06c4b08df45145a.
Sep 13 09:44:14.158360 containerd[1530]: time="2025-09-13T09:44:14.158321116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bcjfk,Uid:1bee8eec-2bc9-4bb9-84e5-94418bcac5b0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e435346981bfede9ed499a29873a81caad2d6c06076facf4d06c4b08df45145a\""
Sep 13 09:44:14.159585 containerd[1530]: time="2025-09-13T09:44:14.159545961Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 09:44:14.829382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670785584.mount: Deactivated successfully.
Sep 13 09:44:15.133045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171202087.mount: Deactivated successfully.
Sep 13 09:44:15.420384 containerd[1530]: time="2025-09-13T09:44:15.420218311Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:15.420984 containerd[1530]: time="2025-09-13T09:44:15.420955633Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 13 09:44:15.421626 containerd[1530]: time="2025-09-13T09:44:15.421601635Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:15.424001 containerd[1530]: time="2025-09-13T09:44:15.423973883Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:15.424845 containerd[1530]: time="2025-09-13T09:44:15.424818526Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.265237005s"
Sep 13 09:44:15.424889 containerd[1530]: time="2025-09-13T09:44:15.424851486Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 13 09:44:15.428568 containerd[1530]: time="2025-09-13T09:44:15.428526098Z" level=info msg="CreateContainer within sandbox \"e435346981bfede9ed499a29873a81caad2d6c06076facf4d06c4b08df45145a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 09:44:15.439578 containerd[1530]: time="2025-09-13T09:44:15.438936492Z" level=info msg="Container a984a334dd551c38e7a88947486a3ed255805fc13a543db69687c71f5740584f: CDI devices from CRI Config.CDIDevices: []"
Sep 13 09:44:15.445521 containerd[1530]: time="2025-09-13T09:44:15.445474114Z" level=info msg="CreateContainer within sandbox \"e435346981bfede9ed499a29873a81caad2d6c06076facf4d06c4b08df45145a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a984a334dd551c38e7a88947486a3ed255805fc13a543db69687c71f5740584f\""
Sep 13 09:44:15.446004 containerd[1530]: time="2025-09-13T09:44:15.445942956Z" level=info msg="StartContainer for \"a984a334dd551c38e7a88947486a3ed255805fc13a543db69687c71f5740584f\""
Sep 13 09:44:15.446732 containerd[1530]: time="2025-09-13T09:44:15.446703598Z" level=info msg="connecting to shim a984a334dd551c38e7a88947486a3ed255805fc13a543db69687c71f5740584f" address="unix:///run/containerd/s/237051a4b5beaf19077c230e19bf86e2ac12483f53746233cd82176c80b073bb" protocol=ttrpc version=3
Sep 13 09:44:15.464700 systemd[1]: Started cri-containerd-a984a334dd551c38e7a88947486a3ed255805fc13a543db69687c71f5740584f.scope - libcontainer container a984a334dd551c38e7a88947486a3ed255805fc13a543db69687c71f5740584f.
Sep 13 09:44:15.487229 containerd[1530]: time="2025-09-13T09:44:15.487195852Z" level=info msg="StartContainer for \"a984a334dd551c38e7a88947486a3ed255805fc13a543db69687c71f5740584f\" returns successfully"
Sep 13 09:44:16.401476 kubelet[2668]: I0913 09:44:16.401417 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-44p59" podStartSLOduration=3.401402149 podStartE2EDuration="3.401402149s" podCreationTimestamp="2025-09-13 09:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 09:44:14.438506344 +0000 UTC m=+7.126137348" watchObservedRunningTime="2025-09-13 09:44:16.401402149 +0000 UTC m=+9.089033113"
Sep 13 09:44:16.463610 kubelet[2668]: I0913 09:44:16.463534 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-bcjfk" podStartSLOduration=2.197508575 podStartE2EDuration="3.463518182s" podCreationTimestamp="2025-09-13 09:44:13 +0000 UTC" firstStartedPulling="2025-09-13 09:44:14.15932192 +0000 UTC m=+6.846952924" lastFinishedPulling="2025-09-13 09:44:15.425331527 +0000 UTC m=+8.112962531" observedRunningTime="2025-09-13 09:44:16.445518126 +0000 UTC m=+9.133149130" watchObservedRunningTime="2025-09-13 09:44:16.463518182 +0000 UTC m=+9.151149146"
Sep 13 09:44:20.717351 sudo[1744]: pam_unix(sudo:session): session closed for user root
Sep 13 09:44:20.718928 sshd[1743]: Connection closed by 10.0.0.1 port 54094
Sep 13 09:44:20.721879 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Sep 13 09:44:20.728385 systemd[1]: sshd@6-10.0.0.32:22-10.0.0.1:54094.service: Deactivated successfully.
Sep 13 09:44:20.731321 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 09:44:20.732649 systemd[1]: session-7.scope: Consumed 6.833s CPU time, 222.6M memory peak.
Sep 13 09:44:20.736778 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit.
Sep 13 09:44:20.739291 systemd-logind[1509]: Removed session 7.
Sep 13 09:44:24.075722 update_engine[1515]: I20250913 09:44:24.075575 1515 update_attempter.cc:509] Updating boot flags...
Sep 13 09:44:26.066470 systemd[1]: Created slice kubepods-besteffort-poda7f1bf8f_52cd_4961_b6b2_b3699c49944c.slice - libcontainer container kubepods-besteffort-poda7f1bf8f_52cd_4961_b6b2_b3699c49944c.slice.
Sep 13 09:44:26.102340 kubelet[2668]: I0913 09:44:26.102300 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7f1bf8f-52cd-4961-b6b2-b3699c49944c-tigera-ca-bundle\") pod \"calico-typha-676b64d6dc-t8vrx\" (UID: \"a7f1bf8f-52cd-4961-b6b2-b3699c49944c\") " pod="calico-system/calico-typha-676b64d6dc-t8vrx"
Sep 13 09:44:26.104322 kubelet[2668]: I0913 09:44:26.104223 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sddw5\" (UniqueName: \"kubernetes.io/projected/a7f1bf8f-52cd-4961-b6b2-b3699c49944c-kube-api-access-sddw5\") pod \"calico-typha-676b64d6dc-t8vrx\" (UID: \"a7f1bf8f-52cd-4961-b6b2-b3699c49944c\") " pod="calico-system/calico-typha-676b64d6dc-t8vrx"
Sep 13 09:44:26.104322 kubelet[2668]: I0913 09:44:26.104272 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a7f1bf8f-52cd-4961-b6b2-b3699c49944c-typha-certs\") pod \"calico-typha-676b64d6dc-t8vrx\" (UID: \"a7f1bf8f-52cd-4961-b6b2-b3699c49944c\") " pod="calico-system/calico-typha-676b64d6dc-t8vrx"
Sep 13 09:44:26.359351 systemd[1]: Created slice kubepods-besteffort-poda0d0bbac_0402_4a0d_8e5f_32e57deb1596.slice - libcontainer container kubepods-besteffort-poda0d0bbac_0402_4a0d_8e5f_32e57deb1596.slice.
Sep 13 09:44:26.370090 containerd[1530]: time="2025-09-13T09:44:26.369842752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-676b64d6dc-t8vrx,Uid:a7f1bf8f-52cd-4961-b6b2-b3699c49944c,Namespace:calico-system,Attempt:0,}"
Sep 13 09:44:26.399168 containerd[1530]: time="2025-09-13T09:44:26.399127840Z" level=info msg="connecting to shim f954af86220e9a895fd2d1b4596f7c135ff48f79fa3b827461353b5e893e21f2" address="unix:///run/containerd/s/d672ce7e7f2261291ca4f5ebe9d414657b1d39f44e39760fc88746f3c9f64dcf" namespace=k8s.io protocol=ttrpc version=3
Sep 13 09:44:26.411013 kubelet[2668]: I0913 09:44:26.410753 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-tigera-ca-bundle\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411013 kubelet[2668]: I0913 09:44:26.410808 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-flexvol-driver-host\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411013 kubelet[2668]: I0913 09:44:26.410841 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-var-run-calico\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411013 kubelet[2668]: I0913 09:44:26.410872 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-var-lib-calico\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411013 kubelet[2668]: I0913 09:44:26.410888 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-xtables-lock\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411185 kubelet[2668]: I0913 09:44:26.410905 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-cni-net-dir\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411185 kubelet[2668]: I0913 09:44:26.410921 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-node-certs\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411185 kubelet[2668]: I0913 09:44:26.410943 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-lib-modules\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411185 kubelet[2668]: I0913 09:44:26.411029 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62d6p\" (UniqueName: \"kubernetes.io/projected/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-kube-api-access-62d6p\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411185 kubelet[2668]: I0913 09:44:26.411077 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-cni-bin-dir\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411293 kubelet[2668]: I0913 09:44:26.411103 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-cni-log-dir\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.411293 kubelet[2668]: I0913 09:44:26.411118 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a0d0bbac-0402-4a0d-8e5f-32e57deb1596-policysync\") pod \"calico-node-t76zl\" (UID: \"a0d0bbac-0402-4a0d-8e5f-32e57deb1596\") " pod="calico-system/calico-node-t76zl"
Sep 13 09:44:26.456724 systemd[1]: Started cri-containerd-f954af86220e9a895fd2d1b4596f7c135ff48f79fa3b827461353b5e893e21f2.scope - libcontainer container f954af86220e9a895fd2d1b4596f7c135ff48f79fa3b827461353b5e893e21f2.
Sep 13 09:44:26.497056 containerd[1530]: time="2025-09-13T09:44:26.497014679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-676b64d6dc-t8vrx,Uid:a7f1bf8f-52cd-4961-b6b2-b3699c49944c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f954af86220e9a895fd2d1b4596f7c135ff48f79fa3b827461353b5e893e21f2\""
Sep 13 09:44:26.501588 containerd[1530]: time="2025-09-13T09:44:26.501561846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 09:44:26.521651 kubelet[2668]: E0913 09:44:26.521629 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 09:44:26.521792 kubelet[2668]: W0913 09:44:26.521741 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 09:44:26.521792 kubelet[2668]: E0913 09:44:26.521766 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 09:44:26.522595 kubelet[2668]: E0913 09:44:26.522579 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 09:44:26.522595 kubelet[2668]: W0913 09:44:26.522595 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 09:44:26.522661 kubelet[2668]: E0913 09:44:26.522608 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 09:44:26.595234 kubelet[2668]: E0913 09:44:26.595181 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcdwp" podUID="78fb1ccb-2f32-457b-8b0e-b01246bef8fc"
Sep 13 09:44:26.595943 kubelet[2668]: E0913 09:44:26.595804 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 09:44:26.595943 kubelet[2668]: W0913 09:44:26.595824 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 09:44:26.595943 kubelet[2668]: E0913 09:44:26.595839 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 09:44:26.596381 kubelet[2668]: E0913 09:44:26.596268 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 09:44:26.596381 kubelet[2668]: W0913 09:44:26.596283 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 09:44:26.596381 kubelet[2668]: E0913 09:44:26.596324 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 09:44:26.597283 kubelet[2668]: E0913 09:44:26.597196 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.597283 kubelet[2668]: W0913 09:44:26.597208 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.597400 kubelet[2668]: E0913 09:44:26.597374 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.597676 kubelet[2668]: E0913 09:44:26.597603 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.597676 kubelet[2668]: W0913 09:44:26.597614 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.597676 kubelet[2668]: E0913 09:44:26.597624 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.598058 kubelet[2668]: E0913 09:44:26.597953 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.598058 kubelet[2668]: W0913 09:44:26.597970 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.598058 kubelet[2668]: E0913 09:44:26.597981 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.598309 kubelet[2668]: E0913 09:44:26.598281 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.598373 kubelet[2668]: W0913 09:44:26.598360 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.598566 kubelet[2668]: E0913 09:44:26.598398 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.598735 kubelet[2668]: E0913 09:44:26.598720 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.598822 kubelet[2668]: W0913 09:44:26.598810 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.598962 kubelet[2668]: E0913 09:44:26.598852 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.599118 kubelet[2668]: E0913 09:44:26.599105 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.599279 kubelet[2668]: W0913 09:44:26.599171 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.599279 kubelet[2668]: E0913 09:44:26.599184 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.599462 kubelet[2668]: E0913 09:44:26.599450 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.599529 kubelet[2668]: W0913 09:44:26.599518 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.599672 kubelet[2668]: E0913 09:44:26.599572 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.599830 kubelet[2668]: E0913 09:44:26.599812 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.599894 kubelet[2668]: W0913 09:44:26.599883 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.599961 kubelet[2668]: E0913 09:44:26.599950 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.600141 kubelet[2668]: E0913 09:44:26.600130 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.600210 kubelet[2668]: W0913 09:44:26.600200 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.600283 kubelet[2668]: E0913 09:44:26.600272 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.600580 kubelet[2668]: E0913 09:44:26.600485 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.600580 kubelet[2668]: W0913 09:44:26.600495 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.600580 kubelet[2668]: E0913 09:44:26.600504 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.600829 kubelet[2668]: E0913 09:44:26.600805 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.600910 kubelet[2668]: W0913 09:44:26.600873 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.600910 kubelet[2668]: E0913 09:44:26.600888 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.601154 kubelet[2668]: E0913 09:44:26.601103 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.601154 kubelet[2668]: W0913 09:44:26.601114 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.601154 kubelet[2668]: E0913 09:44:26.601123 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.605514 kubelet[2668]: E0913 09:44:26.605497 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.605686 kubelet[2668]: W0913 09:44:26.605590 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.605686 kubelet[2668]: E0913 09:44:26.605608 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.605947 kubelet[2668]: E0913 09:44:26.605933 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.606200 kubelet[2668]: W0913 09:44:26.606025 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.606200 kubelet[2668]: E0913 09:44:26.606039 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.607181 kubelet[2668]: E0913 09:44:26.607141 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.607295 kubelet[2668]: W0913 09:44:26.607280 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.607444 kubelet[2668]: E0913 09:44:26.607403 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.607967 kubelet[2668]: E0913 09:44:26.607811 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.607967 kubelet[2668]: W0913 09:44:26.607870 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.607967 kubelet[2668]: E0913 09:44:26.607882 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.612061 kubelet[2668]: E0913 09:44:26.611675 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.612061 kubelet[2668]: W0913 09:44:26.611710 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.612061 kubelet[2668]: E0913 09:44:26.611726 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.612061 kubelet[2668]: E0913 09:44:26.611950 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.612061 kubelet[2668]: W0913 09:44:26.611960 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.612061 kubelet[2668]: E0913 09:44:26.611971 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.613570 kubelet[2668]: E0913 09:44:26.613484 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.613570 kubelet[2668]: W0913 09:44:26.613500 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.613570 kubelet[2668]: E0913 09:44:26.613515 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.613570 kubelet[2668]: I0913 09:44:26.613546 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78fb1ccb-2f32-457b-8b0e-b01246bef8fc-kubelet-dir\") pod \"csi-node-driver-jcdwp\" (UID: \"78fb1ccb-2f32-457b-8b0e-b01246bef8fc\") " pod="calico-system/csi-node-driver-jcdwp" Sep 13 09:44:26.613963 kubelet[2668]: E0913 09:44:26.613943 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.613963 kubelet[2668]: W0913 09:44:26.613958 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.614494 kubelet[2668]: E0913 09:44:26.613969 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.614494 kubelet[2668]: I0913 09:44:26.614031 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/78fb1ccb-2f32-457b-8b0e-b01246bef8fc-socket-dir\") pod \"csi-node-driver-jcdwp\" (UID: \"78fb1ccb-2f32-457b-8b0e-b01246bef8fc\") " pod="calico-system/csi-node-driver-jcdwp" Sep 13 09:44:26.614494 kubelet[2668]: E0913 09:44:26.614240 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.614494 kubelet[2668]: W0913 09:44:26.614252 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.614494 kubelet[2668]: E0913 09:44:26.614263 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.614494 kubelet[2668]: I0913 09:44:26.614284 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8cr\" (UniqueName: \"kubernetes.io/projected/78fb1ccb-2f32-457b-8b0e-b01246bef8fc-kube-api-access-fs8cr\") pod \"csi-node-driver-jcdwp\" (UID: \"78fb1ccb-2f32-457b-8b0e-b01246bef8fc\") " pod="calico-system/csi-node-driver-jcdwp" Sep 13 09:44:26.614494 kubelet[2668]: E0913 09:44:26.614461 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.614494 kubelet[2668]: W0913 09:44:26.614474 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.614735 kubelet[2668]: E0913 09:44:26.614487 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.614735 kubelet[2668]: E0913 09:44:26.614673 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.614735 kubelet[2668]: W0913 09:44:26.614681 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.614735 kubelet[2668]: E0913 09:44:26.614691 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.614862 kubelet[2668]: E0913 09:44:26.614845 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.614862 kubelet[2668]: W0913 09:44:26.614857 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.614941 kubelet[2668]: E0913 09:44:26.614877 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.615019 kubelet[2668]: E0913 09:44:26.615006 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.615046 kubelet[2668]: W0913 09:44:26.615017 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.615046 kubelet[2668]: E0913 09:44:26.615032 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.615167 kubelet[2668]: E0913 09:44:26.615156 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.615189 kubelet[2668]: W0913 09:44:26.615167 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.615210 kubelet[2668]: E0913 09:44:26.615190 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.615228 kubelet[2668]: I0913 09:44:26.615209 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/78fb1ccb-2f32-457b-8b0e-b01246bef8fc-registration-dir\") pod \"csi-node-driver-jcdwp\" (UID: \"78fb1ccb-2f32-457b-8b0e-b01246bef8fc\") " pod="calico-system/csi-node-driver-jcdwp" Sep 13 09:44:26.615433 kubelet[2668]: E0913 09:44:26.615418 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.615462 kubelet[2668]: W0913 09:44:26.615433 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.615462 kubelet[2668]: E0913 09:44:26.615443 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.615501 kubelet[2668]: I0913 09:44:26.615465 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/78fb1ccb-2f32-457b-8b0e-b01246bef8fc-varrun\") pod \"csi-node-driver-jcdwp\" (UID: \"78fb1ccb-2f32-457b-8b0e-b01246bef8fc\") " pod="calico-system/csi-node-driver-jcdwp" Sep 13 09:44:26.615664 kubelet[2668]: E0913 09:44:26.615649 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.615698 kubelet[2668]: W0913 09:44:26.615664 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.615698 kubelet[2668]: E0913 09:44:26.615675 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.615800 kubelet[2668]: E0913 09:44:26.615790 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.615800 kubelet[2668]: W0913 09:44:26.615799 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.615838 kubelet[2668]: E0913 09:44:26.615806 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.615942 kubelet[2668]: E0913 09:44:26.615931 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.615964 kubelet[2668]: W0913 09:44:26.615942 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.615964 kubelet[2668]: E0913 09:44:26.615949 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.616076 kubelet[2668]: E0913 09:44:26.616066 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.616076 kubelet[2668]: W0913 09:44:26.616075 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.616110 kubelet[2668]: E0913 09:44:26.616082 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.616222 kubelet[2668]: E0913 09:44:26.616212 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.616253 kubelet[2668]: W0913 09:44:26.616221 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.616253 kubelet[2668]: E0913 09:44:26.616229 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.616365 kubelet[2668]: E0913 09:44:26.616354 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.616365 kubelet[2668]: W0913 09:44:26.616364 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.616401 kubelet[2668]: E0913 09:44:26.616371 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:26.664574 containerd[1530]: time="2025-09-13T09:44:26.664308591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t76zl,Uid:a0d0bbac-0402-4a0d-8e5f-32e57deb1596,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:26.686729 containerd[1530]: time="2025-09-13T09:44:26.686679707Z" level=info msg="connecting to shim 4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada" address="unix:///run/containerd/s/810cfb80fed64bf61d44f29dfba7bbcd3e178bd873dfb10f6a8e32fc936b128d" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:26.716730 kubelet[2668]: E0913 09:44:26.716689 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.716730 kubelet[2668]: W0913 09:44:26.716714 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.716982 kubelet[2668]: E0913 09:44:26.716733 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:26.718784 kubelet[2668]: E0913 09:44:26.718762 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:26.718784 kubelet[2668]: W0913 09:44:26.718781 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:26.718784 kubelet[2668]: E0913 09:44:26.718795 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 09:44:26.719024 kubelet[2668]: E0913 09:44:26.719005 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 09:44:26.719024 kubelet[2668]: W0913 09:44:26.719020 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 09:44:26.719098 kubelet[2668]: E0913 09:44:26.719030 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 09:44:26.761714 systemd[1]: Started cri-containerd-4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada.scope - libcontainer container 4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada.
Sep 13 09:44:26.786534 containerd[1530]: time="2025-09-13T09:44:26.786417469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t76zl,Uid:a0d0bbac-0402-4a0d-8e5f-32e57deb1596,Namespace:calico-system,Attempt:0,} returns sandbox id \"4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada\""
Sep 13 09:44:27.325305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531221665.mount: Deactivated successfully.
Sep 13 09:44:28.399592 kubelet[2668]: E0913 09:44:28.399178 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcdwp" podUID="78fb1ccb-2f32-457b-8b0e-b01246bef8fc"
Sep 13 09:44:29.088496 containerd[1530]: time="2025-09-13T09:44:29.088447845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:29.089654 containerd[1530]: time="2025-09-13T09:44:29.089630087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 13 09:44:29.090458 containerd[1530]: time="2025-09-13T09:44:29.090437688Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:29.092939 containerd[1530]: time="2025-09-13T09:44:29.092531410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:29.093722 containerd[1530]: time="2025-09-13T09:44:29.093692412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.592027325s"
Sep 13 09:44:29.093802 containerd[1530]: time="2025-09-13T09:44:29.093787212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 13 09:44:29.096877 containerd[1530]: time="2025-09-13T09:44:29.096846136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 09:44:29.113112 containerd[1530]: time="2025-09-13T09:44:29.113058798Z" level=info msg="CreateContainer within sandbox \"f954af86220e9a895fd2d1b4596f7c135ff48f79fa3b827461353b5e893e21f2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 09:44:29.122274 containerd[1530]: time="2025-09-13T09:44:29.122217650Z" level=info msg="Container a936f3dc17466678a0af7fedeff1a4556922d6fc4a93dbb3c38f7e1368699971: CDI devices from CRI Config.CDIDevices: []"
Sep 13 09:44:29.135970 containerd[1530]: time="2025-09-13T09:44:29.135910028Z" level=info msg="CreateContainer within sandbox \"f954af86220e9a895fd2d1b4596f7c135ff48f79fa3b827461353b5e893e21f2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a936f3dc17466678a0af7fedeff1a4556922d6fc4a93dbb3c38f7e1368699971\""
Sep 13 09:44:29.138706 containerd[1530]: time="2025-09-13T09:44:29.138681712Z" level=info msg="StartContainer for \"a936f3dc17466678a0af7fedeff1a4556922d6fc4a93dbb3c38f7e1368699971\""
Sep 13 09:44:29.139792 containerd[1530]: time="2025-09-13T09:44:29.139759794Z" level=info msg="connecting to shim a936f3dc17466678a0af7fedeff1a4556922d6fc4a93dbb3c38f7e1368699971" address="unix:///run/containerd/s/d672ce7e7f2261291ca4f5ebe9d414657b1d39f44e39760fc88746f3c9f64dcf" protocol=ttrpc version=3
Sep 13 09:44:29.158690 systemd[1]: Started cri-containerd-a936f3dc17466678a0af7fedeff1a4556922d6fc4a93dbb3c38f7e1368699971.scope - libcontainer container a936f3dc17466678a0af7fedeff1a4556922d6fc4a93dbb3c38f7e1368699971.
Sep 13 09:44:29.193307 containerd[1530]: time="2025-09-13T09:44:29.193203985Z" level=info msg="StartContainer for \"a936f3dc17466678a0af7fedeff1a4556922d6fc4a93dbb3c38f7e1368699971\" returns successfully"
Sep 13 09:44:29.529368 kubelet[2668]: E0913 09:44:29.529335 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 09:44:29.529368 kubelet[2668]: W0913 09:44:29.529355 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 09:44:29.529368 kubelet[2668]: E0913 09:44:29.529372 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.529765 kubelet[2668]: E0913 09:44:29.529665 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.529765 kubelet[2668]: W0913 09:44:29.529672 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.529765 kubelet[2668]: E0913 09:44:29.529679 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.529873 kubelet[2668]: E0913 09:44:29.529793 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.529873 kubelet[2668]: W0913 09:44:29.529800 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.529873 kubelet[2668]: E0913 09:44:29.529806 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.529949 kubelet[2668]: E0913 09:44:29.529925 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.529949 kubelet[2668]: W0913 09:44:29.529939 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.529949 kubelet[2668]: E0913 09:44:29.529946 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.530064 kubelet[2668]: E0913 09:44:29.530053 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.530084 kubelet[2668]: W0913 09:44:29.530063 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.530084 kubelet[2668]: E0913 09:44:29.530071 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.530190 kubelet[2668]: E0913 09:44:29.530181 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.530190 kubelet[2668]: W0913 09:44:29.530190 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.530240 kubelet[2668]: E0913 09:44:29.530196 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.530342 kubelet[2668]: E0913 09:44:29.530331 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.530342 kubelet[2668]: W0913 09:44:29.530340 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.530383 kubelet[2668]: E0913 09:44:29.530347 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.530483 kubelet[2668]: E0913 09:44:29.530472 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.530483 kubelet[2668]: W0913 09:44:29.530482 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.530523 kubelet[2668]: E0913 09:44:29.530489 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.530625 kubelet[2668]: E0913 09:44:29.530614 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.530625 kubelet[2668]: W0913 09:44:29.530624 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.530665 kubelet[2668]: E0913 09:44:29.530631 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.530752 kubelet[2668]: E0913 09:44:29.530742 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.530752 kubelet[2668]: W0913 09:44:29.530751 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.530794 kubelet[2668]: E0913 09:44:29.530757 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.530871 kubelet[2668]: E0913 09:44:29.530862 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.530871 kubelet[2668]: W0913 09:44:29.530871 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.530907 kubelet[2668]: E0913 09:44:29.530877 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.531004 kubelet[2668]: E0913 09:44:29.530994 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.531028 kubelet[2668]: W0913 09:44:29.531006 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.531028 kubelet[2668]: E0913 09:44:29.531012 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.531136 kubelet[2668]: E0913 09:44:29.531125 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.531158 kubelet[2668]: W0913 09:44:29.531136 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.531158 kubelet[2668]: E0913 09:44:29.531143 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.531271 kubelet[2668]: E0913 09:44:29.531260 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.531271 kubelet[2668]: W0913 09:44:29.531270 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.531313 kubelet[2668]: E0913 09:44:29.531276 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.542668 kubelet[2668]: E0913 09:44:29.542630 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.542668 kubelet[2668]: W0913 09:44:29.542647 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.542668 kubelet[2668]: E0913 09:44:29.542660 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.542859 kubelet[2668]: E0913 09:44:29.542831 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.542859 kubelet[2668]: W0913 09:44:29.542844 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.542859 kubelet[2668]: E0913 09:44:29.542853 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.543049 kubelet[2668]: E0913 09:44:29.543021 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.543049 kubelet[2668]: W0913 09:44:29.543033 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.543049 kubelet[2668]: E0913 09:44:29.543041 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.543266 kubelet[2668]: E0913 09:44:29.543239 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.543266 kubelet[2668]: W0913 09:44:29.543257 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.543313 kubelet[2668]: E0913 09:44:29.543268 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 09:44:29.543447 kubelet[2668]: E0913 09:44:29.543427 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 09:44:29.543447 kubelet[2668]: W0913 09:44:29.543439 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 09:44:29.543447 kubelet[2668]: E0913 09:44:29.543446 2668 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 09:44:29.904322 containerd[1530]: time="2025-09-13T09:44:29.904212097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:29.904976 containerd[1530]: time="2025-09-13T09:44:29.904952458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 13 09:44:29.907185 containerd[1530]: time="2025-09-13T09:44:29.907138821Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:29.909614 containerd[1530]: time="2025-09-13T09:44:29.909590784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:29.910130 containerd[1530]: time="2025-09-13T09:44:29.910086665Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 813.206369ms" Sep 13 09:44:29.910162 containerd[1530]: time="2025-09-13T09:44:29.910134225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 09:44:29.913672 containerd[1530]: time="2025-09-13T09:44:29.913641629Z" level=info msg="CreateContainer within sandbox \"4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 09:44:29.920025 containerd[1530]: time="2025-09-13T09:44:29.919993598Z" level=info msg="Container 6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:29.933889 containerd[1530]: time="2025-09-13T09:44:29.933852056Z" level=info msg="CreateContainer within sandbox \"4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f\"" Sep 13 09:44:29.934235 containerd[1530]: time="2025-09-13T09:44:29.934201017Z" level=info msg="StartContainer for \"6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f\"" Sep 13 09:44:29.935781 containerd[1530]: time="2025-09-13T09:44:29.935753099Z" level=info msg="connecting to shim 6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f" address="unix:///run/containerd/s/810cfb80fed64bf61d44f29dfba7bbcd3e178bd873dfb10f6a8e32fc936b128d" protocol=ttrpc version=3 Sep 13 09:44:29.957773 systemd[1]: Started cri-containerd-6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f.scope - libcontainer container 
6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f. Sep 13 09:44:29.991734 containerd[1530]: time="2025-09-13T09:44:29.991691374Z" level=info msg="StartContainer for \"6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f\" returns successfully" Sep 13 09:44:30.002936 systemd[1]: cri-containerd-6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f.scope: Deactivated successfully. Sep 13 09:44:30.003214 systemd[1]: cri-containerd-6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f.scope: Consumed 29ms CPU time, 6.1M memory peak, 4.5M written to disk. Sep 13 09:44:30.011648 containerd[1530]: time="2025-09-13T09:44:30.011597920Z" level=info msg="received exit event container_id:\"6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f\" id:\"6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f\" pid:3371 exited_at:{seconds:1757756670 nanos:5246192}" Sep 13 09:44:30.011766 containerd[1530]: time="2025-09-13T09:44:30.011682120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f\" id:\"6fbf9868388af873c0fbdc50653a1512861289bc2fe34d615b98790914d45f1f\" pid:3371 exited_at:{seconds:1757756670 nanos:5246192}" Sep 13 09:44:30.399270 kubelet[2668]: E0913 09:44:30.399202 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcdwp" podUID="78fb1ccb-2f32-457b-8b0e-b01246bef8fc" Sep 13 09:44:30.487144 containerd[1530]: time="2025-09-13T09:44:30.487082876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 09:44:30.489382 kubelet[2668]: I0913 09:44:30.489351 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 09:44:30.503178 kubelet[2668]: I0913 09:44:30.503119 
2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-676b64d6dc-t8vrx" podStartSLOduration=1.904678001 podStartE2EDuration="4.503103536s" podCreationTimestamp="2025-09-13 09:44:26 +0000 UTC" firstStartedPulling="2025-09-13 09:44:26.498282041 +0000 UTC m=+19.185913005" lastFinishedPulling="2025-09-13 09:44:29.096707536 +0000 UTC m=+21.784338540" observedRunningTime="2025-09-13 09:44:29.492859266 +0000 UTC m=+22.180490230" watchObservedRunningTime="2025-09-13 09:44:30.503103536 +0000 UTC m=+23.190734540" Sep 13 09:44:32.398914 kubelet[2668]: E0913 09:44:32.398871 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jcdwp" podUID="78fb1ccb-2f32-457b-8b0e-b01246bef8fc" Sep 13 09:44:32.948175 containerd[1530]: time="2025-09-13T09:44:32.948125982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:32.948784 containerd[1530]: time="2025-09-13T09:44:32.948746622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 13 09:44:32.949597 containerd[1530]: time="2025-09-13T09:44:32.949571983Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:32.951526 containerd[1530]: time="2025-09-13T09:44:32.951495826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:32.952132 containerd[1530]: time="2025-09-13T09:44:32.952051786Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.46492815s" Sep 13 09:44:32.952177 containerd[1530]: time="2025-09-13T09:44:32.952137386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 09:44:32.956001 containerd[1530]: time="2025-09-13T09:44:32.955964750Z" level=info msg="CreateContainer within sandbox \"4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 09:44:32.964558 containerd[1530]: time="2025-09-13T09:44:32.964512720Z" level=info msg="Container da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:32.971666 containerd[1530]: time="2025-09-13T09:44:32.971636088Z" level=info msg="CreateContainer within sandbox \"4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19\"" Sep 13 09:44:32.972428 containerd[1530]: time="2025-09-13T09:44:32.972263408Z" level=info msg="StartContainer for \"da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19\"" Sep 13 09:44:32.973648 containerd[1530]: time="2025-09-13T09:44:32.973605930Z" level=info msg="connecting to shim da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19" address="unix:///run/containerd/s/810cfb80fed64bf61d44f29dfba7bbcd3e178bd873dfb10f6a8e32fc936b128d" protocol=ttrpc version=3 Sep 13 09:44:32.998709 systemd[1]: Started 
cri-containerd-da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19.scope - libcontainer container da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19. Sep 13 09:44:33.032295 containerd[1530]: time="2025-09-13T09:44:33.032253792Z" level=info msg="StartContainer for \"da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19\" returns successfully" Sep 13 09:44:33.665694 systemd[1]: cri-containerd-da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19.scope: Deactivated successfully. Sep 13 09:44:33.666025 systemd[1]: cri-containerd-da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19.scope: Consumed 447ms CPU time, 177.9M memory peak, 3M read from disk, 165.8M written to disk. Sep 13 09:44:33.668525 containerd[1530]: time="2025-09-13T09:44:33.668487570Z" level=info msg="received exit event container_id:\"da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19\" id:\"da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19\" pid:3431 exited_at:{seconds:1757756673 nanos:668302810}" Sep 13 09:44:33.668786 containerd[1530]: time="2025-09-13T09:44:33.668758250Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19\" id:\"da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19\" pid:3431 exited_at:{seconds:1757756673 nanos:668302810}" Sep 13 09:44:33.687525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da0d71ffafe670bd9914c4cdd019fcc7b3360a002d5f6c276eb107504b1e1a19-rootfs.mount: Deactivated successfully. Sep 13 09:44:33.733585 kubelet[2668]: I0913 09:44:33.733104 2668 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 09:44:33.783908 systemd[1]: Created slice kubepods-besteffort-pod2adee010_a209_4bb0_8d7c_308d0cc347bd.slice - libcontainer container kubepods-besteffort-pod2adee010_a209_4bb0_8d7c_308d0cc347bd.slice. 
Sep 13 09:44:33.791641 systemd[1]: Created slice kubepods-besteffort-pod2f76ad57_fd01_4555_89ee_97cc6fb621a4.slice - libcontainer container kubepods-besteffort-pod2f76ad57_fd01_4555_89ee_97cc6fb621a4.slice. Sep 13 09:44:33.801087 systemd[1]: Created slice kubepods-burstable-pod99d74fae_09db_4ec5_b5b7_6da4c98bfcd0.slice - libcontainer container kubepods-burstable-pod99d74fae_09db_4ec5_b5b7_6da4c98bfcd0.slice. Sep 13 09:44:33.807440 systemd[1]: Created slice kubepods-besteffort-pode5aefee4_e7ee_4daf_b20d_499e2884454d.slice - libcontainer container kubepods-besteffort-pode5aefee4_e7ee_4daf_b20d_499e2884454d.slice. Sep 13 09:44:33.814578 systemd[1]: Created slice kubepods-besteffort-pod96591e0b_8d86_4397_a1b1_01479c0f2d93.slice - libcontainer container kubepods-besteffort-pod96591e0b_8d86_4397_a1b1_01479c0f2d93.slice. Sep 13 09:44:33.820412 systemd[1]: Created slice kubepods-burstable-podded51c04_df9a_49a6_9f3d_2646f74ffca6.slice - libcontainer container kubepods-burstable-podded51c04_df9a_49a6_9f3d_2646f74ffca6.slice. Sep 13 09:44:33.826454 systemd[1]: Created slice kubepods-besteffort-podb8b03254_6243_448c_a760_f348a82440e4.slice - libcontainer container kubepods-besteffort-podb8b03254_6243_448c_a760_f348a82440e4.slice. 
Sep 13 09:44:33.876377 kubelet[2668]: I0913 09:44:33.876330 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b03254-6243-448c-a760-f348a82440e4-goldmane-key-pair\") pod \"goldmane-54d579b49d-4ttmc\" (UID: \"b8b03254-6243-448c-a760-f348a82440e4\") " pod="calico-system/goldmane-54d579b49d-4ttmc" Sep 13 09:44:33.876533 kubelet[2668]: I0913 09:44:33.876398 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f76ad57-fd01-4555-89ee-97cc6fb621a4-calico-apiserver-certs\") pod \"calico-apiserver-54677bd554-6662p\" (UID: \"2f76ad57-fd01-4555-89ee-97cc6fb621a4\") " pod="calico-apiserver/calico-apiserver-54677bd554-6662p" Sep 13 09:44:33.876533 kubelet[2668]: I0913 09:44:33.876419 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkptj\" (UniqueName: \"kubernetes.io/projected/99d74fae-09db-4ec5-b5b7-6da4c98bfcd0-kube-api-access-gkptj\") pod \"coredns-674b8bbfcf-b2xmw\" (UID: \"99d74fae-09db-4ec5-b5b7-6da4c98bfcd0\") " pod="kube-system/coredns-674b8bbfcf-b2xmw" Sep 13 09:44:33.876533 kubelet[2668]: I0913 09:44:33.876436 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqgb\" (UniqueName: \"kubernetes.io/projected/e5aefee4-e7ee-4daf-b20d-499e2884454d-kube-api-access-7fqgb\") pod \"calico-apiserver-54677bd554-psmpc\" (UID: \"e5aefee4-e7ee-4daf-b20d-499e2884454d\") " pod="calico-apiserver/calico-apiserver-54677bd554-psmpc" Sep 13 09:44:33.876533 kubelet[2668]: I0913 09:44:33.876470 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2adee010-a209-4bb0-8d7c-308d0cc347bd-whisker-ca-bundle\") pod 
\"whisker-6f548dc87d-qwkx7\" (UID: \"2adee010-a209-4bb0-8d7c-308d0cc347bd\") " pod="calico-system/whisker-6f548dc87d-qwkx7" Sep 13 09:44:33.876533 kubelet[2668]: I0913 09:44:33.876487 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjw8p\" (UniqueName: \"kubernetes.io/projected/2f76ad57-fd01-4555-89ee-97cc6fb621a4-kube-api-access-pjw8p\") pod \"calico-apiserver-54677bd554-6662p\" (UID: \"2f76ad57-fd01-4555-89ee-97cc6fb621a4\") " pod="calico-apiserver/calico-apiserver-54677bd554-6662p" Sep 13 09:44:33.876856 kubelet[2668]: I0913 09:44:33.876509 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2adee010-a209-4bb0-8d7c-308d0cc347bd-whisker-backend-key-pair\") pod \"whisker-6f548dc87d-qwkx7\" (UID: \"2adee010-a209-4bb0-8d7c-308d0cc347bd\") " pod="calico-system/whisker-6f548dc87d-qwkx7" Sep 13 09:44:33.876856 kubelet[2668]: I0913 09:44:33.876525 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96591e0b-8d86-4397-a1b1-01479c0f2d93-tigera-ca-bundle\") pod \"calico-kube-controllers-55c649555f-mb6wd\" (UID: \"96591e0b-8d86-4397-a1b1-01479c0f2d93\") " pod="calico-system/calico-kube-controllers-55c649555f-mb6wd" Sep 13 09:44:33.876856 kubelet[2668]: I0913 09:44:33.876540 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b03254-6243-448c-a760-f348a82440e4-config\") pod \"goldmane-54d579b49d-4ttmc\" (UID: \"b8b03254-6243-448c-a760-f348a82440e4\") " pod="calico-system/goldmane-54d579b49d-4ttmc" Sep 13 09:44:33.876856 kubelet[2668]: I0913 09:44:33.876583 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b8b03254-6243-448c-a760-f348a82440e4-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-4ttmc\" (UID: \"b8b03254-6243-448c-a760-f348a82440e4\") " pod="calico-system/goldmane-54d579b49d-4ttmc" Sep 13 09:44:33.876856 kubelet[2668]: I0913 09:44:33.876603 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ded51c04-df9a-49a6-9f3d-2646f74ffca6-config-volume\") pod \"coredns-674b8bbfcf-hs6cj\" (UID: \"ded51c04-df9a-49a6-9f3d-2646f74ffca6\") " pod="kube-system/coredns-674b8bbfcf-hs6cj" Sep 13 09:44:33.876980 kubelet[2668]: I0913 09:44:33.876683 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x2ts\" (UniqueName: \"kubernetes.io/projected/b8b03254-6243-448c-a760-f348a82440e4-kube-api-access-6x2ts\") pod \"goldmane-54d579b49d-4ttmc\" (UID: \"b8b03254-6243-448c-a760-f348a82440e4\") " pod="calico-system/goldmane-54d579b49d-4ttmc" Sep 13 09:44:33.876980 kubelet[2668]: I0913 09:44:33.876699 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99d74fae-09db-4ec5-b5b7-6da4c98bfcd0-config-volume\") pod \"coredns-674b8bbfcf-b2xmw\" (UID: \"99d74fae-09db-4ec5-b5b7-6da4c98bfcd0\") " pod="kube-system/coredns-674b8bbfcf-b2xmw" Sep 13 09:44:33.876980 kubelet[2668]: I0913 09:44:33.876741 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7lc7\" (UniqueName: \"kubernetes.io/projected/2adee010-a209-4bb0-8d7c-308d0cc347bd-kube-api-access-h7lc7\") pod \"whisker-6f548dc87d-qwkx7\" (UID: \"2adee010-a209-4bb0-8d7c-308d0cc347bd\") " pod="calico-system/whisker-6f548dc87d-qwkx7" Sep 13 09:44:33.876980 kubelet[2668]: I0913 09:44:33.876762 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e5aefee4-e7ee-4daf-b20d-499e2884454d-calico-apiserver-certs\") pod \"calico-apiserver-54677bd554-psmpc\" (UID: \"e5aefee4-e7ee-4daf-b20d-499e2884454d\") " pod="calico-apiserver/calico-apiserver-54677bd554-psmpc" Sep 13 09:44:33.876980 kubelet[2668]: I0913 09:44:33.876784 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-484d8\" (UniqueName: \"kubernetes.io/projected/ded51c04-df9a-49a6-9f3d-2646f74ffca6-kube-api-access-484d8\") pod \"coredns-674b8bbfcf-hs6cj\" (UID: \"ded51c04-df9a-49a6-9f3d-2646f74ffca6\") " pod="kube-system/coredns-674b8bbfcf-hs6cj" Sep 13 09:44:33.877080 kubelet[2668]: I0913 09:44:33.876800 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp46q\" (UniqueName: \"kubernetes.io/projected/96591e0b-8d86-4397-a1b1-01479c0f2d93-kube-api-access-cp46q\") pod \"calico-kube-controllers-55c649555f-mb6wd\" (UID: \"96591e0b-8d86-4397-a1b1-01479c0f2d93\") " pod="calico-system/calico-kube-controllers-55c649555f-mb6wd" Sep 13 09:44:34.090545 containerd[1530]: time="2025-09-13T09:44:34.090502361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f548dc87d-qwkx7,Uid:2adee010-a209-4bb0-8d7c-308d0cc347bd,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:34.097459 containerd[1530]: time="2025-09-13T09:44:34.097189767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54677bd554-6662p,Uid:2f76ad57-fd01-4555-89ee-97cc6fb621a4,Namespace:calico-apiserver,Attempt:0,}" Sep 13 09:44:34.105601 containerd[1530]: time="2025-09-13T09:44:34.105537495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2xmw,Uid:99d74fae-09db-4ec5-b5b7-6da4c98bfcd0,Namespace:kube-system,Attempt:0,}" Sep 13 09:44:34.109987 containerd[1530]: time="2025-09-13T09:44:34.109882539Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-54677bd554-psmpc,Uid:e5aefee4-e7ee-4daf-b20d-499e2884454d,Namespace:calico-apiserver,Attempt:0,}" Sep 13 09:44:34.119016 containerd[1530]: time="2025-09-13T09:44:34.118930468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c649555f-mb6wd,Uid:96591e0b-8d86-4397-a1b1-01479c0f2d93,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:34.123894 containerd[1530]: time="2025-09-13T09:44:34.123754513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hs6cj,Uid:ded51c04-df9a-49a6-9f3d-2646f74ffca6,Namespace:kube-system,Attempt:0,}" Sep 13 09:44:34.131235 containerd[1530]: time="2025-09-13T09:44:34.131189240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-4ttmc,Uid:b8b03254-6243-448c-a760-f348a82440e4,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:34.202807 containerd[1530]: time="2025-09-13T09:44:34.202748229Z" level=error msg="Failed to destroy network for sandbox \"4db4e0b4a26b245b8e15ab358cf462ba7d1383f8ca3f6689551534b1ac552b4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.205409 containerd[1530]: time="2025-09-13T09:44:34.205363152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f548dc87d-qwkx7,Uid:2adee010-a209-4bb0-8d7c-308d0cc347bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db4e0b4a26b245b8e15ab358cf462ba7d1383f8ca3f6689551534b1ac552b4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.208004 kubelet[2668]: E0913 09:44:34.207909 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"4db4e0b4a26b245b8e15ab358cf462ba7d1383f8ca3f6689551534b1ac552b4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.208102 kubelet[2668]: E0913 09:44:34.208015 2668 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db4e0b4a26b245b8e15ab358cf462ba7d1383f8ca3f6689551534b1ac552b4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f548dc87d-qwkx7" Sep 13 09:44:34.211840 kubelet[2668]: E0913 09:44:34.211785 2668 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db4e0b4a26b245b8e15ab358cf462ba7d1383f8ca3f6689551534b1ac552b4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f548dc87d-qwkx7" Sep 13 09:44:34.211927 kubelet[2668]: E0913 09:44:34.211897 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f548dc87d-qwkx7_calico-system(2adee010-a209-4bb0-8d7c-308d0cc347bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f548dc87d-qwkx7_calico-system(2adee010-a209-4bb0-8d7c-308d0cc347bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4db4e0b4a26b245b8e15ab358cf462ba7d1383f8ca3f6689551534b1ac552b4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-6f548dc87d-qwkx7" podUID="2adee010-a209-4bb0-8d7c-308d0cc347bd" Sep 13 09:44:34.213943 containerd[1530]: time="2025-09-13T09:44:34.211255878Z" level=error msg="Failed to destroy network for sandbox \"11ef1184295e2706ddf0503b93651e12960827d9436f00ba35237155eb87514b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.217647 containerd[1530]: time="2025-09-13T09:44:34.217601044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c649555f-mb6wd,Uid:96591e0b-8d86-4397-a1b1-01479c0f2d93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ef1184295e2706ddf0503b93651e12960827d9436f00ba35237155eb87514b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.218096 kubelet[2668]: E0913 09:44:34.217826 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ef1184295e2706ddf0503b93651e12960827d9436f00ba35237155eb87514b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.218096 kubelet[2668]: E0913 09:44:34.217896 2668 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ef1184295e2706ddf0503b93651e12960827d9436f00ba35237155eb87514b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-55c649555f-mb6wd" Sep 13 09:44:34.218096 kubelet[2668]: E0913 09:44:34.217915 2668 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ef1184295e2706ddf0503b93651e12960827d9436f00ba35237155eb87514b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c649555f-mb6wd" Sep 13 09:44:34.218366 kubelet[2668]: E0913 09:44:34.217978 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55c649555f-mb6wd_calico-system(96591e0b-8d86-4397-a1b1-01479c0f2d93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55c649555f-mb6wd_calico-system(96591e0b-8d86-4397-a1b1-01479c0f2d93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11ef1184295e2706ddf0503b93651e12960827d9436f00ba35237155eb87514b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55c649555f-mb6wd" podUID="96591e0b-8d86-4397-a1b1-01479c0f2d93" Sep 13 09:44:34.219125 containerd[1530]: time="2025-09-13T09:44:34.219060245Z" level=error msg="Failed to destroy network for sandbox \"9a01df4a555796790f5829772083c8562f775e8b104ac7d0d31c38228e8947bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.224083 containerd[1530]: time="2025-09-13T09:44:34.223886530Z" level=error msg="Failed to destroy network for sandbox 
\"fbad31cbd981274bd10bd953a36b35a8aae8acc3130bce6223a33c1ef338676f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.226571 containerd[1530]: time="2025-09-13T09:44:34.226503932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2xmw,Uid:99d74fae-09db-4ec5-b5b7-6da4c98bfcd0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a01df4a555796790f5829772083c8562f775e8b104ac7d0d31c38228e8947bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.226804 kubelet[2668]: E0913 09:44:34.226736 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a01df4a555796790f5829772083c8562f775e8b104ac7d0d31c38228e8947bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.226804 kubelet[2668]: E0913 09:44:34.226786 2668 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a01df4a555796790f5829772083c8562f775e8b104ac7d0d31c38228e8947bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b2xmw" Sep 13 09:44:34.226893 kubelet[2668]: E0913 09:44:34.226815 2668 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9a01df4a555796790f5829772083c8562f775e8b104ac7d0d31c38228e8947bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b2xmw" Sep 13 09:44:34.226893 kubelet[2668]: E0913 09:44:34.226858 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b2xmw_kube-system(99d74fae-09db-4ec5-b5b7-6da4c98bfcd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b2xmw_kube-system(99d74fae-09db-4ec5-b5b7-6da4c98bfcd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a01df4a555796790f5829772083c8562f775e8b104ac7d0d31c38228e8947bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b2xmw" podUID="99d74fae-09db-4ec5-b5b7-6da4c98bfcd0" Sep 13 09:44:34.228400 containerd[1530]: time="2025-09-13T09:44:34.228370054Z" level=error msg="Failed to destroy network for sandbox \"f2bb1e2b3ae71a6a05871c694e98021581661d26dafc0cecd2ce12849b46f332\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.228714 containerd[1530]: time="2025-09-13T09:44:34.228684255Z" level=error msg="Failed to destroy network for sandbox \"c6da486306f60af5476a4e48c419c9426d1ed140f6a8531e189a0ab481591d23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.230088 containerd[1530]: time="2025-09-13T09:44:34.230053576Z" level=error msg="Failed to destroy network for 
sandbox \"644de02bbee923b496dc180ac24a037d1fbbb37d556871b9d4fe5595a4ac03f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.237144 containerd[1530]: time="2025-09-13T09:44:34.237091343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hs6cj,Uid:ded51c04-df9a-49a6-9f3d-2646f74ffca6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbad31cbd981274bd10bd953a36b35a8aae8acc3130bce6223a33c1ef338676f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.237360 kubelet[2668]: E0913 09:44:34.237316 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbad31cbd981274bd10bd953a36b35a8aae8acc3130bce6223a33c1ef338676f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.237424 kubelet[2668]: E0913 09:44:34.237376 2668 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbad31cbd981274bd10bd953a36b35a8aae8acc3130bce6223a33c1ef338676f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hs6cj" Sep 13 09:44:34.237424 kubelet[2668]: E0913 09:44:34.237398 2668 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fbad31cbd981274bd10bd953a36b35a8aae8acc3130bce6223a33c1ef338676f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hs6cj" Sep 13 09:44:34.237483 kubelet[2668]: E0913 09:44:34.237458 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hs6cj_kube-system(ded51c04-df9a-49a6-9f3d-2646f74ffca6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hs6cj_kube-system(ded51c04-df9a-49a6-9f3d-2646f74ffca6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbad31cbd981274bd10bd953a36b35a8aae8acc3130bce6223a33c1ef338676f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hs6cj" podUID="ded51c04-df9a-49a6-9f3d-2646f74ffca6" Sep 13 09:44:34.248778 containerd[1530]: time="2025-09-13T09:44:34.248673034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54677bd554-psmpc,Uid:e5aefee4-e7ee-4daf-b20d-499e2884454d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2bb1e2b3ae71a6a05871c694e98021581661d26dafc0cecd2ce12849b46f332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.249014 kubelet[2668]: E0913 09:44:34.248949 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2bb1e2b3ae71a6a05871c694e98021581661d26dafc0cecd2ce12849b46f332\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.249073 kubelet[2668]: E0913 09:44:34.249012 2668 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2bb1e2b3ae71a6a05871c694e98021581661d26dafc0cecd2ce12849b46f332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54677bd554-psmpc" Sep 13 09:44:34.249073 kubelet[2668]: E0913 09:44:34.249032 2668 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2bb1e2b3ae71a6a05871c694e98021581661d26dafc0cecd2ce12849b46f332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54677bd554-psmpc" Sep 13 09:44:34.249114 kubelet[2668]: E0913 09:44:34.249073 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54677bd554-psmpc_calico-apiserver(e5aefee4-e7ee-4daf-b20d-499e2884454d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54677bd554-psmpc_calico-apiserver(e5aefee4-e7ee-4daf-b20d-499e2884454d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2bb1e2b3ae71a6a05871c694e98021581661d26dafc0cecd2ce12849b46f332\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54677bd554-psmpc" podUID="e5aefee4-e7ee-4daf-b20d-499e2884454d" Sep 13 
09:44:34.261256 containerd[1530]: time="2025-09-13T09:44:34.261159446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-4ttmc,Uid:b8b03254-6243-448c-a760-f348a82440e4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6da486306f60af5476a4e48c419c9426d1ed140f6a8531e189a0ab481591d23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.261366 kubelet[2668]: E0913 09:44:34.261333 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6da486306f60af5476a4e48c419c9426d1ed140f6a8531e189a0ab481591d23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.261402 kubelet[2668]: E0913 09:44:34.261374 2668 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6da486306f60af5476a4e48c419c9426d1ed140f6a8531e189a0ab481591d23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-4ttmc" Sep 13 09:44:34.261402 kubelet[2668]: E0913 09:44:34.261391 2668 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6da486306f60af5476a4e48c419c9426d1ed140f6a8531e189a0ab481591d23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-54d579b49d-4ttmc" Sep 13 09:44:34.261460 kubelet[2668]: E0913 09:44:34.261430 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-4ttmc_calico-system(b8b03254-6243-448c-a760-f348a82440e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-4ttmc_calico-system(b8b03254-6243-448c-a760-f348a82440e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6da486306f60af5476a4e48c419c9426d1ed140f6a8531e189a0ab481591d23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-4ttmc" podUID="b8b03254-6243-448c-a760-f348a82440e4" Sep 13 09:44:34.261941 containerd[1530]: time="2025-09-13T09:44:34.261893047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54677bd554-6662p,Uid:2f76ad57-fd01-4555-89ee-97cc6fb621a4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"644de02bbee923b496dc180ac24a037d1fbbb37d556871b9d4fe5595a4ac03f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.262093 kubelet[2668]: E0913 09:44:34.262065 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644de02bbee923b496dc180ac24a037d1fbbb37d556871b9d4fe5595a4ac03f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.262132 kubelet[2668]: E0913 09:44:34.262105 2668 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644de02bbee923b496dc180ac24a037d1fbbb37d556871b9d4fe5595a4ac03f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54677bd554-6662p" Sep 13 09:44:34.262132 kubelet[2668]: E0913 09:44:34.262122 2668 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644de02bbee923b496dc180ac24a037d1fbbb37d556871b9d4fe5595a4ac03f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54677bd554-6662p" Sep 13 09:44:34.262193 kubelet[2668]: E0913 09:44:34.262157 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54677bd554-6662p_calico-apiserver(2f76ad57-fd01-4555-89ee-97cc6fb621a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54677bd554-6662p_calico-apiserver(2f76ad57-fd01-4555-89ee-97cc6fb621a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"644de02bbee923b496dc180ac24a037d1fbbb37d556871b9d4fe5595a4ac03f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54677bd554-6662p" podUID="2f76ad57-fd01-4555-89ee-97cc6fb621a4" Sep 13 09:44:34.406087 systemd[1]: Created slice kubepods-besteffort-pod78fb1ccb_2f32_457b_8b0e_b01246bef8fc.slice - libcontainer container kubepods-besteffort-pod78fb1ccb_2f32_457b_8b0e_b01246bef8fc.slice. 
Sep 13 09:44:34.409180 containerd[1530]: time="2025-09-13T09:44:34.409144110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jcdwp,Uid:78fb1ccb-2f32-457b-8b0e-b01246bef8fc,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:34.453719 containerd[1530]: time="2025-09-13T09:44:34.453674833Z" level=error msg="Failed to destroy network for sandbox \"7f92dca2c742504bd90c7cc54f6ab10aafe51a25f7b3855d2a3ba025bc570eec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.454821 containerd[1530]: time="2025-09-13T09:44:34.454788074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jcdwp,Uid:78fb1ccb-2f32-457b-8b0e-b01246bef8fc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f92dca2c742504bd90c7cc54f6ab10aafe51a25f7b3855d2a3ba025bc570eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.455426 kubelet[2668]: E0913 09:44:34.455029 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f92dca2c742504bd90c7cc54f6ab10aafe51a25f7b3855d2a3ba025bc570eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 09:44:34.455426 kubelet[2668]: E0913 09:44:34.455096 2668 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f92dca2c742504bd90c7cc54f6ab10aafe51a25f7b3855d2a3ba025bc570eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jcdwp" Sep 13 09:44:34.455426 kubelet[2668]: E0913 09:44:34.455116 2668 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f92dca2c742504bd90c7cc54f6ab10aafe51a25f7b3855d2a3ba025bc570eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jcdwp" Sep 13 09:44:34.456695 kubelet[2668]: E0913 09:44:34.455167 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jcdwp_calico-system(78fb1ccb-2f32-457b-8b0e-b01246bef8fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jcdwp_calico-system(78fb1ccb-2f32-457b-8b0e-b01246bef8fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f92dca2c742504bd90c7cc54f6ab10aafe51a25f7b3855d2a3ba025bc570eec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jcdwp" podUID="78fb1ccb-2f32-457b-8b0e-b01246bef8fc" Sep 13 09:44:34.501510 containerd[1530]: time="2025-09-13T09:44:34.501075239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 09:44:37.007196 kubelet[2668]: I0913 09:44:37.007128 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 09:44:37.361466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425070886.mount: Deactivated successfully. 
Sep 13 09:44:37.601661 containerd[1530]: time="2025-09-13T09:44:37.601626363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:37.602419 containerd[1530]: time="2025-09-13T09:44:37.602395684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 09:44:37.604283 containerd[1530]: time="2025-09-13T09:44:37.604252125Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:37.605253 containerd[1530]: time="2025-09-13T09:44:37.605224006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:37.606233 containerd[1530]: time="2025-09-13T09:44:37.605949327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.104835928s" Sep 13 09:44:37.606233 containerd[1530]: time="2025-09-13T09:44:37.606128727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 09:44:37.618137 containerd[1530]: time="2025-09-13T09:44:37.618062216Z" level=info msg="CreateContainer within sandbox \"4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 09:44:37.628590 containerd[1530]: time="2025-09-13T09:44:37.628501385Z" level=info msg="Container 
ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:37.637621 containerd[1530]: time="2025-09-13T09:44:37.637574272Z" level=info msg="CreateContainer within sandbox \"4878c42574aade54eb2802d8cd99d3d76c5162398a34d8eddd013bd75853aada\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16\"" Sep 13 09:44:37.638276 containerd[1530]: time="2025-09-13T09:44:37.638239673Z" level=info msg="StartContainer for \"ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16\"" Sep 13 09:44:37.639898 containerd[1530]: time="2025-09-13T09:44:37.639848434Z" level=info msg="connecting to shim ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16" address="unix:///run/containerd/s/810cfb80fed64bf61d44f29dfba7bbcd3e178bd873dfb10f6a8e32fc936b128d" protocol=ttrpc version=3 Sep 13 09:44:37.662710 systemd[1]: Started cri-containerd-ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16.scope - libcontainer container ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16. Sep 13 09:44:37.719451 containerd[1530]: time="2025-09-13T09:44:37.719415897Z" level=info msg="StartContainer for \"ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16\" returns successfully" Sep 13 09:44:37.838277 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 09:44:37.838433 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 09:44:38.014154 kubelet[2668]: I0913 09:44:38.014059 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7lc7\" (UniqueName: \"kubernetes.io/projected/2adee010-a209-4bb0-8d7c-308d0cc347bd-kube-api-access-h7lc7\") pod \"2adee010-a209-4bb0-8d7c-308d0cc347bd\" (UID: \"2adee010-a209-4bb0-8d7c-308d0cc347bd\") " Sep 13 09:44:38.014154 kubelet[2668]: I0913 09:44:38.014118 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2adee010-a209-4bb0-8d7c-308d0cc347bd-whisker-ca-bundle\") pod \"2adee010-a209-4bb0-8d7c-308d0cc347bd\" (UID: \"2adee010-a209-4bb0-8d7c-308d0cc347bd\") " Sep 13 09:44:38.014154 kubelet[2668]: I0913 09:44:38.014147 2668 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2adee010-a209-4bb0-8d7c-308d0cc347bd-whisker-backend-key-pair\") pod \"2adee010-a209-4bb0-8d7c-308d0cc347bd\" (UID: \"2adee010-a209-4bb0-8d7c-308d0cc347bd\") " Sep 13 09:44:38.028462 kubelet[2668]: I0913 09:44:38.028413 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2adee010-a209-4bb0-8d7c-308d0cc347bd-kube-api-access-h7lc7" (OuterVolumeSpecName: "kube-api-access-h7lc7") pod "2adee010-a209-4bb0-8d7c-308d0cc347bd" (UID: "2adee010-a209-4bb0-8d7c-308d0cc347bd"). InnerVolumeSpecName "kube-api-access-h7lc7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 09:44:38.030035 kubelet[2668]: I0913 09:44:38.028917 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2adee010-a209-4bb0-8d7c-308d0cc347bd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2adee010-a209-4bb0-8d7c-308d0cc347bd" (UID: "2adee010-a209-4bb0-8d7c-308d0cc347bd"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 09:44:38.032410 kubelet[2668]: I0913 09:44:38.032349 2668 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2adee010-a209-4bb0-8d7c-308d0cc347bd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2adee010-a209-4bb0-8d7c-308d0cc347bd" (UID: "2adee010-a209-4bb0-8d7c-308d0cc347bd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 09:44:38.114969 kubelet[2668]: I0913 09:44:38.114926 2668 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h7lc7\" (UniqueName: \"kubernetes.io/projected/2adee010-a209-4bb0-8d7c-308d0cc347bd-kube-api-access-h7lc7\") on node \"localhost\" DevicePath \"\"" Sep 13 09:44:38.114969 kubelet[2668]: I0913 09:44:38.114958 2668 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2adee010-a209-4bb0-8d7c-308d0cc347bd-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 09:44:38.114969 kubelet[2668]: I0913 09:44:38.114968 2668 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2adee010-a209-4bb0-8d7c-308d0cc347bd-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 09:44:38.364966 systemd[1]: var-lib-kubelet-pods-2adee010\x2da209\x2d4bb0\x2d8d7c\x2d308d0cc347bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh7lc7.mount: Deactivated successfully. Sep 13 09:44:38.365072 systemd[1]: var-lib-kubelet-pods-2adee010\x2da209\x2d4bb0\x2d8d7c\x2d308d0cc347bd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 09:44:38.529044 systemd[1]: Removed slice kubepods-besteffort-pod2adee010_a209_4bb0_8d7c_308d0cc347bd.slice - libcontainer container kubepods-besteffort-pod2adee010_a209_4bb0_8d7c_308d0cc347bd.slice. 
Sep 13 09:44:38.555610 kubelet[2668]: I0913 09:44:38.555531 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t76zl" podStartSLOduration=1.737402083 podStartE2EDuration="12.555515217s" podCreationTimestamp="2025-09-13 09:44:26 +0000 UTC" firstStartedPulling="2025-09-13 09:44:26.788523313 +0000 UTC m=+19.476154317" lastFinishedPulling="2025-09-13 09:44:37.606636447 +0000 UTC m=+30.294267451" observedRunningTime="2025-09-13 09:44:38.54515481 +0000 UTC m=+31.232785814" watchObservedRunningTime="2025-09-13 09:44:38.555515217 +0000 UTC m=+31.243146301" Sep 13 09:44:38.592009 systemd[1]: Created slice kubepods-besteffort-pod4053a089_8a17_44f7_92cc_1c5a5c56d67f.slice - libcontainer container kubepods-besteffort-pod4053a089_8a17_44f7_92cc_1c5a5c56d67f.slice. Sep 13 09:44:38.618053 kubelet[2668]: I0913 09:44:38.617966 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qxtv\" (UniqueName: \"kubernetes.io/projected/4053a089-8a17-44f7-92cc-1c5a5c56d67f-kube-api-access-9qxtv\") pod \"whisker-7f496566fc-q4pts\" (UID: \"4053a089-8a17-44f7-92cc-1c5a5c56d67f\") " pod="calico-system/whisker-7f496566fc-q4pts" Sep 13 09:44:38.618232 kubelet[2668]: I0913 09:44:38.618215 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4053a089-8a17-44f7-92cc-1c5a5c56d67f-whisker-backend-key-pair\") pod \"whisker-7f496566fc-q4pts\" (UID: \"4053a089-8a17-44f7-92cc-1c5a5c56d67f\") " pod="calico-system/whisker-7f496566fc-q4pts" Sep 13 09:44:38.618365 kubelet[2668]: I0913 09:44:38.618351 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4053a089-8a17-44f7-92cc-1c5a5c56d67f-whisker-ca-bundle\") pod \"whisker-7f496566fc-q4pts\" (UID: 
\"4053a089-8a17-44f7-92cc-1c5a5c56d67f\") " pod="calico-system/whisker-7f496566fc-q4pts" Sep 13 09:44:38.896276 containerd[1530]: time="2025-09-13T09:44:38.896075232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f496566fc-q4pts,Uid:4053a089-8a17-44f7-92cc-1c5a5c56d67f,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:39.056978 systemd-networkd[1435]: cali0e85c9c5ae8: Link UP Sep 13 09:44:39.057160 systemd-networkd[1435]: cali0e85c9c5ae8: Gained carrier Sep 13 09:44:39.070698 containerd[1530]: 2025-09-13 09:44:38.915 [INFO][3808] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 09:44:39.070698 containerd[1530]: 2025-09-13 09:44:38.944 [INFO][3808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7f496566fc--q4pts-eth0 whisker-7f496566fc- calico-system 4053a089-8a17-44f7-92cc-1c5a5c56d67f 903 0 2025-09-13 09:44:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f496566fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f496566fc-q4pts eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0e85c9c5ae8 [] [] }} ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Namespace="calico-system" Pod="whisker-7f496566fc-q4pts" WorkloadEndpoint="localhost-k8s-whisker--7f496566fc--q4pts-" Sep 13 09:44:39.070698 containerd[1530]: 2025-09-13 09:44:38.944 [INFO][3808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Namespace="calico-system" Pod="whisker-7f496566fc-q4pts" WorkloadEndpoint="localhost-k8s-whisker--7f496566fc--q4pts-eth0" Sep 13 09:44:39.070698 containerd[1530]: 2025-09-13 09:44:39.013 [INFO][3822] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" HandleID="k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Workload="localhost-k8s-whisker--7f496566fc--q4pts-eth0" Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.013 [INFO][3822] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" HandleID="k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Workload="localhost-k8s-whisker--7f496566fc--q4pts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004705f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f496566fc-q4pts", "timestamp":"2025-09-13 09:44:39.01367892 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.013 [INFO][3822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.013 [INFO][3822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.014 [INFO][3822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.024 [INFO][3822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" host="localhost" Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.030 [INFO][3822] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.033 [INFO][3822] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.035 [INFO][3822] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.037 [INFO][3822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:39.070910 containerd[1530]: 2025-09-13 09:44:39.037 [INFO][3822] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" host="localhost" Sep 13 09:44:39.071108 containerd[1530]: 2025-09-13 09:44:39.038 [INFO][3822] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca Sep 13 09:44:39.071108 containerd[1530]: 2025-09-13 09:44:39.041 [INFO][3822] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" host="localhost" Sep 13 09:44:39.071108 containerd[1530]: 2025-09-13 09:44:39.045 [INFO][3822] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" host="localhost" Sep 13 09:44:39.071108 containerd[1530]: 2025-09-13 09:44:39.045 [INFO][3822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" host="localhost" Sep 13 09:44:39.071108 containerd[1530]: 2025-09-13 09:44:39.045 [INFO][3822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 09:44:39.071108 containerd[1530]: 2025-09-13 09:44:39.045 [INFO][3822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" HandleID="k8s-pod-network.46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Workload="localhost-k8s-whisker--7f496566fc--q4pts-eth0" Sep 13 09:44:39.071225 containerd[1530]: 2025-09-13 09:44:39.050 [INFO][3808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Namespace="calico-system" Pod="whisker-7f496566fc-q4pts" WorkloadEndpoint="localhost-k8s-whisker--7f496566fc--q4pts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f496566fc--q4pts-eth0", GenerateName:"whisker-7f496566fc-", Namespace:"calico-system", SelfLink:"", UID:"4053a089-8a17-44f7-92cc-1c5a5c56d67f", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f496566fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f496566fc-q4pts", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e85c9c5ae8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:39.071225 containerd[1530]: 2025-09-13 09:44:39.050 [INFO][3808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Namespace="calico-system" Pod="whisker-7f496566fc-q4pts" WorkloadEndpoint="localhost-k8s-whisker--7f496566fc--q4pts-eth0" Sep 13 09:44:39.071291 containerd[1530]: 2025-09-13 09:44:39.051 [INFO][3808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e85c9c5ae8 ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Namespace="calico-system" Pod="whisker-7f496566fc-q4pts" WorkloadEndpoint="localhost-k8s-whisker--7f496566fc--q4pts-eth0" Sep 13 09:44:39.071291 containerd[1530]: 2025-09-13 09:44:39.057 [INFO][3808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Namespace="calico-system" Pod="whisker-7f496566fc-q4pts" WorkloadEndpoint="localhost-k8s-whisker--7f496566fc--q4pts-eth0" Sep 13 09:44:39.071367 containerd[1530]: 2025-09-13 09:44:39.059 [INFO][3808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Namespace="calico-system" Pod="whisker-7f496566fc-q4pts" 
WorkloadEndpoint="localhost-k8s-whisker--7f496566fc--q4pts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f496566fc--q4pts-eth0", GenerateName:"whisker-7f496566fc-", Namespace:"calico-system", SelfLink:"", UID:"4053a089-8a17-44f7-92cc-1c5a5c56d67f", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f496566fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca", Pod:"whisker-7f496566fc-q4pts", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e85c9c5ae8", MAC:"16:26:34:27:79:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:39.071413 containerd[1530]: 2025-09-13 09:44:39.067 [INFO][3808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" Namespace="calico-system" Pod="whisker-7f496566fc-q4pts" WorkloadEndpoint="localhost-k8s-whisker--7f496566fc--q4pts-eth0" Sep 13 09:44:39.239417 containerd[1530]: time="2025-09-13T09:44:39.239310718Z" level=info msg="connecting to shim 
46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca" address="unix:///run/containerd/s/3dfcd47599fac77f13d4724980b89229e887a28203980de0b5c3d56ba8f6846b" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:39.272741 systemd[1]: Started cri-containerd-46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca.scope - libcontainer container 46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca. Sep 13 09:44:39.296343 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 09:44:39.330488 containerd[1530]: time="2025-09-13T09:44:39.329390582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f496566fc-q4pts,Uid:4053a089-8a17-44f7-92cc-1c5a5c56d67f,Namespace:calico-system,Attempt:0,} returns sandbox id \"46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca\"" Sep 13 09:44:39.337963 containerd[1530]: time="2025-09-13T09:44:39.337891987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 09:44:39.402007 kubelet[2668]: I0913 09:44:39.401780 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2adee010-a209-4bb0-8d7c-308d0cc347bd" path="/var/lib/kubelet/pods/2adee010-a209-4bb0-8d7c-308d0cc347bd/volumes" Sep 13 09:44:39.565350 systemd-networkd[1435]: vxlan.calico: Link UP Sep 13 09:44:39.565356 systemd-networkd[1435]: vxlan.calico: Gained carrier Sep 13 09:44:39.669258 containerd[1530]: time="2025-09-13T09:44:39.669218420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16\" id:\"19a63731ec73e20801ff1589e43be05b54f94d131d1866e6d46a2da25010d52e\" pid:4065 exit_status:1 exited_at:{seconds:1757756679 nanos:662530255}" Sep 13 09:44:40.102651 containerd[1530]: time="2025-09-13T09:44:40.102602800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 13 09:44:40.103156 containerd[1530]: time="2025-09-13T09:44:40.103127000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 09:44:40.103836 containerd[1530]: time="2025-09-13T09:44:40.103813401Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:40.105449 containerd[1530]: time="2025-09-13T09:44:40.105419042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:40.106467 containerd[1530]: time="2025-09-13T09:44:40.106427562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 768.500414ms" Sep 13 09:44:40.106467 containerd[1530]: time="2025-09-13T09:44:40.106460962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 09:44:40.109978 containerd[1530]: time="2025-09-13T09:44:40.109950165Z" level=info msg="CreateContainer within sandbox \"46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 09:44:40.116858 containerd[1530]: time="2025-09-13T09:44:40.116826049Z" level=info msg="Container b4147f7a089c2b482294ea294d5e9cc7203ee046ff4385d930e0ecf30a7a73a9: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:40.118794 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2965752876.mount: Deactivated successfully. Sep 13 09:44:40.125462 containerd[1530]: time="2025-09-13T09:44:40.125427055Z" level=info msg="CreateContainer within sandbox \"46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b4147f7a089c2b482294ea294d5e9cc7203ee046ff4385d930e0ecf30a7a73a9\"" Sep 13 09:44:40.127595 containerd[1530]: time="2025-09-13T09:44:40.126635656Z" level=info msg="StartContainer for \"b4147f7a089c2b482294ea294d5e9cc7203ee046ff4385d930e0ecf30a7a73a9\"" Sep 13 09:44:40.127951 containerd[1530]: time="2025-09-13T09:44:40.127924256Z" level=info msg="connecting to shim b4147f7a089c2b482294ea294d5e9cc7203ee046ff4385d930e0ecf30a7a73a9" address="unix:///run/containerd/s/3dfcd47599fac77f13d4724980b89229e887a28203980de0b5c3d56ba8f6846b" protocol=ttrpc version=3 Sep 13 09:44:40.146702 systemd[1]: Started cri-containerd-b4147f7a089c2b482294ea294d5e9cc7203ee046ff4385d930e0ecf30a7a73a9.scope - libcontainer container b4147f7a089c2b482294ea294d5e9cc7203ee046ff4385d930e0ecf30a7a73a9. 
Sep 13 09:44:40.177959 containerd[1530]: time="2025-09-13T09:44:40.177919769Z" level=info msg="StartContainer for \"b4147f7a089c2b482294ea294d5e9cc7203ee046ff4385d930e0ecf30a7a73a9\" returns successfully" Sep 13 09:44:40.179698 containerd[1530]: time="2025-09-13T09:44:40.179666011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 09:44:40.599939 containerd[1530]: time="2025-09-13T09:44:40.599899167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16\" id:\"d5ab4d49be363460fc9961aa1f7a3015bdafcd67f2b68971d40e6939e7795476\" pid:4161 exit_status:1 exited_at:{seconds:1757756680 nanos:599494407}" Sep 13 09:44:40.976699 systemd-networkd[1435]: cali0e85c9c5ae8: Gained IPv6LL Sep 13 09:44:41.039723 systemd-networkd[1435]: vxlan.calico: Gained IPv6LL Sep 13 09:44:41.311836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482533804.mount: Deactivated successfully. Sep 13 09:44:41.324385 containerd[1530]: time="2025-09-13T09:44:41.324337030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:41.324894 containerd[1530]: time="2025-09-13T09:44:41.324868511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 09:44:41.325765 containerd[1530]: time="2025-09-13T09:44:41.325740911Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:41.328042 containerd[1530]: time="2025-09-13T09:44:41.327824433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:41.328726 
containerd[1530]: time="2025-09-13T09:44:41.328698913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.148998942s" Sep 13 09:44:41.328764 containerd[1530]: time="2025-09-13T09:44:41.328729553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 09:44:41.333228 containerd[1530]: time="2025-09-13T09:44:41.333187156Z" level=info msg="CreateContainer within sandbox \"46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 09:44:41.338558 containerd[1530]: time="2025-09-13T09:44:41.338519839Z" level=info msg="Container 22578201603aaa6230bd6ac4f92d9d59170931e3005f475166e61c4afcccbf9e: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:41.346045 containerd[1530]: time="2025-09-13T09:44:41.346015924Z" level=info msg="CreateContainer within sandbox \"46d64ddafb9fbe5a698f6a116788916764d597e1be9371e12ddfc8a8d9f395ca\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"22578201603aaa6230bd6ac4f92d9d59170931e3005f475166e61c4afcccbf9e\"" Sep 13 09:44:41.346597 containerd[1530]: time="2025-09-13T09:44:41.346573964Z" level=info msg="StartContainer for \"22578201603aaa6230bd6ac4f92d9d59170931e3005f475166e61c4afcccbf9e\"" Sep 13 09:44:41.347629 containerd[1530]: time="2025-09-13T09:44:41.347531285Z" level=info msg="connecting to shim 22578201603aaa6230bd6ac4f92d9d59170931e3005f475166e61c4afcccbf9e" address="unix:///run/containerd/s/3dfcd47599fac77f13d4724980b89229e887a28203980de0b5c3d56ba8f6846b" 
protocol=ttrpc version=3 Sep 13 09:44:41.368725 systemd[1]: Started cri-containerd-22578201603aaa6230bd6ac4f92d9d59170931e3005f475166e61c4afcccbf9e.scope - libcontainer container 22578201603aaa6230bd6ac4f92d9d59170931e3005f475166e61c4afcccbf9e. Sep 13 09:44:41.430727 containerd[1530]: time="2025-09-13T09:44:41.430693296Z" level=info msg="StartContainer for \"22578201603aaa6230bd6ac4f92d9d59170931e3005f475166e61c4afcccbf9e\" returns successfully" Sep 13 09:44:41.544909 kubelet[2668]: I0913 09:44:41.544386 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f496566fc-q4pts" podStartSLOduration=1.552071679 podStartE2EDuration="3.544371606s" podCreationTimestamp="2025-09-13 09:44:38 +0000 UTC" firstStartedPulling="2025-09-13 09:44:39.337088387 +0000 UTC m=+32.024719391" lastFinishedPulling="2025-09-13 09:44:41.329388314 +0000 UTC m=+34.017019318" observedRunningTime="2025-09-13 09:44:41.543600486 +0000 UTC m=+34.231231490" watchObservedRunningTime="2025-09-13 09:44:41.544371606 +0000 UTC m=+34.232002570" Sep 13 09:44:45.402016 containerd[1530]: time="2025-09-13T09:44:45.401543268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54677bd554-6662p,Uid:2f76ad57-fd01-4555-89ee-97cc6fb621a4,Namespace:calico-apiserver,Attempt:0,}" Sep 13 09:44:45.519458 systemd-networkd[1435]: calif19dd7638ae: Link UP Sep 13 09:44:45.519870 systemd-networkd[1435]: calif19dd7638ae: Gained carrier Sep 13 09:44:45.536743 containerd[1530]: 2025-09-13 09:44:45.448 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54677bd554--6662p-eth0 calico-apiserver-54677bd554- calico-apiserver 2f76ad57-fd01-4555-89ee-97cc6fb621a4 833 0 2025-09-13 09:44:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54677bd554 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54677bd554-6662p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif19dd7638ae [] [] }} ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-6662p" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--6662p-" Sep 13 09:44:45.536743 containerd[1530]: 2025-09-13 09:44:45.448 [INFO][4229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-6662p" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" Sep 13 09:44:45.536743 containerd[1530]: 2025-09-13 09:44:45.480 [INFO][4244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" HandleID="k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Workload="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.480 [INFO][4244] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" HandleID="k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Workload="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54677bd554-6662p", "timestamp":"2025-09-13 09:44:45.480685425 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.480 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.480 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.480 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.490 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" host="localhost" Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.495 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.502 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.503 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.505 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:45.536928 containerd[1530]: 2025-09-13 09:44:45.505 [INFO][4244] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" host="localhost" Sep 13 09:44:45.537116 containerd[1530]: 2025-09-13 09:44:45.507 [INFO][4244] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712 Sep 13 09:44:45.537116 containerd[1530]: 2025-09-13 09:44:45.510 [INFO][4244] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" host="localhost" Sep 13 09:44:45.537116 containerd[1530]: 2025-09-13 09:44:45.515 [INFO][4244] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" host="localhost" Sep 13 09:44:45.537116 containerd[1530]: 2025-09-13 09:44:45.515 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" host="localhost" Sep 13 09:44:45.537116 containerd[1530]: 2025-09-13 09:44:45.515 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 09:44:45.537116 containerd[1530]: 2025-09-13 09:44:45.515 [INFO][4244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" HandleID="k8s-pod-network.85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Workload="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" Sep 13 09:44:45.537235 containerd[1530]: 2025-09-13 09:44:45.517 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-6662p" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54677bd554--6662p-eth0", GenerateName:"calico-apiserver-54677bd554-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f76ad57-fd01-4555-89ee-97cc6fb621a4", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 21, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54677bd554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54677bd554-6662p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif19dd7638ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:45.537292 containerd[1530]: 2025-09-13 09:44:45.517 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-6662p" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" Sep 13 09:44:45.537292 containerd[1530]: 2025-09-13 09:44:45.517 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif19dd7638ae ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-6662p" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" Sep 13 09:44:45.537292 containerd[1530]: 2025-09-13 09:44:45.519 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-6662p" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" Sep 13 09:44:45.537360 containerd[1530]: 2025-09-13 09:44:45.520 [INFO][4229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-6662p" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54677bd554--6662p-eth0", GenerateName:"calico-apiserver-54677bd554-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f76ad57-fd01-4555-89ee-97cc6fb621a4", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54677bd554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712", Pod:"calico-apiserver-54677bd554-6662p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif19dd7638ae", MAC:"1a:28:f7:66:3d:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:45.537410 containerd[1530]: 2025-09-13 09:44:45.529 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-6662p" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--6662p-eth0" Sep 13 09:44:45.564051 containerd[1530]: time="2025-09-13T09:44:45.564009505Z" level=info msg="connecting to shim 85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712" address="unix:///run/containerd/s/9fb863512c1f529dedd2fb69a1316610c6fa6f342abbe3405941a1baeed1ba43" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:45.592716 systemd[1]: Started cri-containerd-85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712.scope - libcontainer container 85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712. 
Sep 13 09:44:45.604259 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 09:44:45.628782 containerd[1530]: time="2025-09-13T09:44:45.628746576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54677bd554-6662p,Uid:2f76ad57-fd01-4555-89ee-97cc6fb621a4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712\"" Sep 13 09:44:45.632388 containerd[1530]: time="2025-09-13T09:44:45.632361498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 09:44:46.413140 containerd[1530]: time="2025-09-13T09:44:46.413088737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jcdwp,Uid:78fb1ccb-2f32-457b-8b0e-b01246bef8fc,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:46.428862 containerd[1530]: time="2025-09-13T09:44:46.428811824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2xmw,Uid:99d74fae-09db-4ec5-b5b7-6da4c98bfcd0,Namespace:kube-system,Attempt:0,}" Sep 13 09:44:46.603585 systemd-networkd[1435]: cali8631ede9125: Link UP Sep 13 09:44:46.604407 systemd-networkd[1435]: cali8631ede9125: Gained carrier Sep 13 09:44:46.640583 containerd[1530]: 2025-09-13 09:44:46.453 [INFO][4308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jcdwp-eth0 csi-node-driver- calico-system 78fb1ccb-2f32-457b-8b0e-b01246bef8fc 714 0 2025-09-13 09:44:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jcdwp eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali8631ede9125 [] [] }} ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Namespace="calico-system" Pod="csi-node-driver-jcdwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--jcdwp-" Sep 13 09:44:46.640583 containerd[1530]: 2025-09-13 09:44:46.453 [INFO][4308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Namespace="calico-system" Pod="csi-node-driver-jcdwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--jcdwp-eth0" Sep 13 09:44:46.640583 containerd[1530]: 2025-09-13 09:44:46.513 [INFO][4323] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" HandleID="k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Workload="localhost-k8s-csi--node--driver--jcdwp-eth0" Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.514 [INFO][4323] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" HandleID="k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Workload="localhost-k8s-csi--node--driver--jcdwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b9340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jcdwp", "timestamp":"2025-09-13 09:44:46.513605062 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.514 [INFO][4323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.514 [INFO][4323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.514 [INFO][4323] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.535 [INFO][4323] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" host="localhost" Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.541 [INFO][4323] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.553 [INFO][4323] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.556 [INFO][4323] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.559 [INFO][4323] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:46.640872 containerd[1530]: 2025-09-13 09:44:46.559 [INFO][4323] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" host="localhost" Sep 13 09:44:46.641068 containerd[1530]: 2025-09-13 09:44:46.560 [INFO][4323] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6 Sep 13 09:44:46.641068 containerd[1530]: 2025-09-13 09:44:46.572 [INFO][4323] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" host="localhost" Sep 13 09:44:46.641068 containerd[1530]: 2025-09-13 09:44:46.591 [INFO][4323] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" host="localhost" Sep 13 09:44:46.641068 containerd[1530]: 2025-09-13 09:44:46.591 [INFO][4323] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" host="localhost" Sep 13 09:44:46.641068 containerd[1530]: 2025-09-13 09:44:46.591 [INFO][4323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 09:44:46.641068 containerd[1530]: 2025-09-13 09:44:46.591 [INFO][4323] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" HandleID="k8s-pod-network.cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Workload="localhost-k8s-csi--node--driver--jcdwp-eth0" Sep 13 09:44:46.641172 containerd[1530]: 2025-09-13 09:44:46.598 [INFO][4308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Namespace="calico-system" Pod="csi-node-driver-jcdwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--jcdwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jcdwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78fb1ccb-2f32-457b-8b0e-b01246bef8fc", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jcdwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8631ede9125", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:46.641239 containerd[1530]: 2025-09-13 09:44:46.599 [INFO][4308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Namespace="calico-system" Pod="csi-node-driver-jcdwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--jcdwp-eth0" Sep 13 09:44:46.641239 containerd[1530]: 2025-09-13 09:44:46.599 [INFO][4308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8631ede9125 ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Namespace="calico-system" Pod="csi-node-driver-jcdwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--jcdwp-eth0" Sep 13 09:44:46.641239 containerd[1530]: 2025-09-13 09:44:46.605 [INFO][4308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Namespace="calico-system" Pod="csi-node-driver-jcdwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--jcdwp-eth0" Sep 13 09:44:46.641295 containerd[1530]: 2025-09-13 09:44:46.606 [INFO][4308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" Namespace="calico-system" Pod="csi-node-driver-jcdwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--jcdwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jcdwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78fb1ccb-2f32-457b-8b0e-b01246bef8fc", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6", Pod:"csi-node-driver-jcdwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8631ede9125", MAC:"c2:e4:57:a5:37:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:46.641343 containerd[1530]: 2025-09-13 09:44:46.632 [INFO][4308] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" 
Namespace="calico-system" Pod="csi-node-driver-jcdwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--jcdwp-eth0" Sep 13 09:44:46.697710 systemd-networkd[1435]: calidd80da51cd1: Link UP Sep 13 09:44:46.698399 systemd-networkd[1435]: calidd80da51cd1: Gained carrier Sep 13 09:44:46.708851 containerd[1530]: time="2025-09-13T09:44:46.708810630Z" level=info msg="connecting to shim cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6" address="unix:///run/containerd/s/3bbfa38828d8c65b9c8bdc11fc84f908f35c8473347f120fb45a6ecfbfec88d3" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:46.725221 containerd[1530]: 2025-09-13 09:44:46.521 [INFO][4338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0 coredns-674b8bbfcf- kube-system 99d74fae-09db-4ec5-b5b7-6da4c98bfcd0 834 0 2025-09-13 09:44:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-b2xmw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd80da51cd1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2xmw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2xmw-" Sep 13 09:44:46.725221 containerd[1530]: 2025-09-13 09:44:46.522 [INFO][4338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2xmw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" Sep 13 09:44:46.725221 containerd[1530]: 2025-09-13 09:44:46.574 [INFO][4350] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" HandleID="k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Workload="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.574 [INFO][4350] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" HandleID="k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Workload="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a3310), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-b2xmw", "timestamp":"2025-09-13 09:44:46.574007209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.574 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.591 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.591 [INFO][4350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.637 [INFO][4350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" host="localhost" Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.649 [INFO][4350] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.659 [INFO][4350] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.665 [INFO][4350] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.672 [INFO][4350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:46.725520 containerd[1530]: 2025-09-13 09:44:46.673 [INFO][4350] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" host="localhost" Sep 13 09:44:46.725801 containerd[1530]: 2025-09-13 09:44:46.678 [INFO][4350] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a Sep 13 09:44:46.725801 containerd[1530]: 2025-09-13 09:44:46.683 [INFO][4350] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" host="localhost" Sep 13 09:44:46.725801 containerd[1530]: 2025-09-13 09:44:46.689 [INFO][4350] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" host="localhost" Sep 13 09:44:46.725801 containerd[1530]: 2025-09-13 09:44:46.689 [INFO][4350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" host="localhost" Sep 13 09:44:46.725801 containerd[1530]: 2025-09-13 09:44:46.689 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 09:44:46.725801 containerd[1530]: 2025-09-13 09:44:46.689 [INFO][4350] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" HandleID="k8s-pod-network.0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Workload="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" Sep 13 09:44:46.725911 containerd[1530]: 2025-09-13 09:44:46.693 [INFO][4338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2xmw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"99d74fae-09db-4ec5-b5b7-6da4c98bfcd0", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-b2xmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd80da51cd1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:46.725973 containerd[1530]: 2025-09-13 09:44:46.694 [INFO][4338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2xmw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" Sep 13 09:44:46.725973 containerd[1530]: 2025-09-13 09:44:46.694 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd80da51cd1 ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2xmw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" Sep 13 09:44:46.725973 containerd[1530]: 2025-09-13 09:44:46.699 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2xmw" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" Sep 13 09:44:46.726171 containerd[1530]: 2025-09-13 09:44:46.701 [INFO][4338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2xmw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"99d74fae-09db-4ec5-b5b7-6da4c98bfcd0", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a", Pod:"coredns-674b8bbfcf-b2xmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd80da51cd1", MAC:"ba:d9:04:55:97:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:46.726171 containerd[1530]: 2025-09-13 09:44:46.719 [INFO][4338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2xmw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2xmw-eth0" Sep 13 09:44:46.738745 systemd[1]: Started cri-containerd-cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6.scope - libcontainer container cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6. Sep 13 09:44:46.752771 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 09:44:46.756989 containerd[1530]: time="2025-09-13T09:44:46.756942171Z" level=info msg="connecting to shim 0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a" address="unix:///run/containerd/s/885cdd931d90118d3962ae6158ff8de85aeafdf5c612b793e73fae02e31cc087" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:46.824709 systemd[1]: Started cri-containerd-0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a.scope - libcontainer container 0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a. 
Sep 13 09:44:46.842411 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 09:44:46.867472 containerd[1530]: time="2025-09-13T09:44:46.867422740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jcdwp,Uid:78fb1ccb-2f32-457b-8b0e-b01246bef8fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6\"" Sep 13 09:44:46.871379 containerd[1530]: time="2025-09-13T09:44:46.871336262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2xmw,Uid:99d74fae-09db-4ec5-b5b7-6da4c98bfcd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a\"" Sep 13 09:44:46.881372 containerd[1530]: time="2025-09-13T09:44:46.881341067Z" level=info msg="CreateContainer within sandbox \"0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 09:44:46.891409 containerd[1530]: time="2025-09-13T09:44:46.891375991Z" level=info msg="Container 2f2442d164c51c59f8204c093e597aaa7255f1e8a0a2c5de57056b9f3f1fcbc3: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:46.898749 containerd[1530]: time="2025-09-13T09:44:46.898711114Z" level=info msg="CreateContainer within sandbox \"0a0886578d0f750130023d7f3a43c859e193c7ba81e6a099dbd0205f67932a6a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f2442d164c51c59f8204c093e597aaa7255f1e8a0a2c5de57056b9f3f1fcbc3\"" Sep 13 09:44:46.899563 containerd[1530]: time="2025-09-13T09:44:46.899523155Z" level=info msg="StartContainer for \"2f2442d164c51c59f8204c093e597aaa7255f1e8a0a2c5de57056b9f3f1fcbc3\"" Sep 13 09:44:46.900716 containerd[1530]: time="2025-09-13T09:44:46.900690835Z" level=info msg="connecting to shim 2f2442d164c51c59f8204c093e597aaa7255f1e8a0a2c5de57056b9f3f1fcbc3" 
address="unix:///run/containerd/s/885cdd931d90118d3962ae6158ff8de85aeafdf5c612b793e73fae02e31cc087" protocol=ttrpc version=3 Sep 13 09:44:46.933838 systemd[1]: Started cri-containerd-2f2442d164c51c59f8204c093e597aaa7255f1e8a0a2c5de57056b9f3f1fcbc3.scope - libcontainer container 2f2442d164c51c59f8204c093e597aaa7255f1e8a0a2c5de57056b9f3f1fcbc3. Sep 13 09:44:46.969140 containerd[1530]: time="2025-09-13T09:44:46.968350306Z" level=info msg="StartContainer for \"2f2442d164c51c59f8204c093e597aaa7255f1e8a0a2c5de57056b9f3f1fcbc3\" returns successfully" Sep 13 09:44:47.234021 systemd[1]: Started sshd@7-10.0.0.32:22-10.0.0.1:54806.service - OpenSSH per-connection server daemon (10.0.0.1:54806). Sep 13 09:44:47.236133 containerd[1530]: time="2025-09-13T09:44:47.236093579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:47.236758 containerd[1530]: time="2025-09-13T09:44:47.236665139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 09:44:47.237466 containerd[1530]: time="2025-09-13T09:44:47.237438739Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:47.246613 containerd[1530]: time="2025-09-13T09:44:47.244616342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:47.246613 containerd[1530]: time="2025-09-13T09:44:47.245908383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.613517245s" Sep 13 09:44:47.246613 containerd[1530]: time="2025-09-13T09:44:47.245934623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 09:44:47.246777 containerd[1530]: time="2025-09-13T09:44:47.246749263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 09:44:47.291992 containerd[1530]: time="2025-09-13T09:44:47.291657922Z" level=info msg="CreateContainer within sandbox \"85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 09:44:47.298153 containerd[1530]: time="2025-09-13T09:44:47.298042805Z" level=info msg="Container 4ce074fc4eb3b2ce35d5e48282fc93ff5dfffdb42b095215bf76069a9e238753: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:47.308532 containerd[1530]: time="2025-09-13T09:44:47.308488569Z" level=info msg="CreateContainer within sandbox \"85db9dc8517369bae795e136fc1c76f837ca6ee902e0cdfd2709b34990522712\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4ce074fc4eb3b2ce35d5e48282fc93ff5dfffdb42b095215bf76069a9e238753\"" Sep 13 09:44:47.310987 containerd[1530]: time="2025-09-13T09:44:47.310956250Z" level=info msg="StartContainer for \"4ce074fc4eb3b2ce35d5e48282fc93ff5dfffdb42b095215bf76069a9e238753\"" Sep 13 09:44:47.312521 containerd[1530]: time="2025-09-13T09:44:47.312495611Z" level=info msg="connecting to shim 4ce074fc4eb3b2ce35d5e48282fc93ff5dfffdb42b095215bf76069a9e238753" address="unix:///run/containerd/s/9fb863512c1f529dedd2fb69a1316610c6fa6f342abbe3405941a1baeed1ba43" protocol=ttrpc version=3 Sep 13 09:44:47.335634 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 54806 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE 
Sep 13 09:44:47.335699 systemd[1]: Started cri-containerd-4ce074fc4eb3b2ce35d5e48282fc93ff5dfffdb42b095215bf76069a9e238753.scope - libcontainer container 4ce074fc4eb3b2ce35d5e48282fc93ff5dfffdb42b095215bf76069a9e238753. Sep 13 09:44:47.337245 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 09:44:47.342895 systemd-logind[1509]: New session 8 of user core. Sep 13 09:44:47.353700 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 09:44:47.383073 containerd[1530]: time="2025-09-13T09:44:47.383032320Z" level=info msg="StartContainer for \"4ce074fc4eb3b2ce35d5e48282fc93ff5dfffdb42b095215bf76069a9e238753\" returns successfully" Sep 13 09:44:47.400035 containerd[1530]: time="2025-09-13T09:44:47.399993407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c649555f-mb6wd,Uid:96591e0b-8d86-4397-a1b1-01479c0f2d93,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:47.400447 containerd[1530]: time="2025-09-13T09:44:47.400418447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hs6cj,Uid:ded51c04-df9a-49a6-9f3d-2646f74ffca6,Namespace:kube-system,Attempt:0,}" Sep 13 09:44:47.441028 systemd-networkd[1435]: calif19dd7638ae: Gained IPv6LL Sep 13 09:44:47.586629 kubelet[2668]: I0913 09:44:47.586093 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54677bd554-6662p" podStartSLOduration=24.97147916 podStartE2EDuration="26.586073725s" podCreationTimestamp="2025-09-13 09:44:21 +0000 UTC" firstStartedPulling="2025-09-13 09:44:45.632024698 +0000 UTC m=+38.319655702" lastFinishedPulling="2025-09-13 09:44:47.246619263 +0000 UTC m=+39.934250267" observedRunningTime="2025-09-13 09:44:47.583332964 +0000 UTC m=+40.270963968" watchObservedRunningTime="2025-09-13 09:44:47.586073725 +0000 UTC m=+40.273704769" Sep 13 09:44:47.611750 kubelet[2668]: I0913 09:44:47.611680 2668 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b2xmw" podStartSLOduration=34.611649416 podStartE2EDuration="34.611649416s" podCreationTimestamp="2025-09-13 09:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 09:44:47.608404135 +0000 UTC m=+40.296035179" watchObservedRunningTime="2025-09-13 09:44:47.611649416 +0000 UTC m=+40.299280380" Sep 13 09:44:47.634417 systemd-networkd[1435]: cali523b7d1bae8: Link UP Sep 13 09:44:47.635200 systemd-networkd[1435]: cali523b7d1bae8: Gained carrier Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.462 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0 coredns-674b8bbfcf- kube-system ded51c04-df9a-49a6-9f3d-2646f74ffca6 836 0 2025-09-13 09:44:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hs6cj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali523b7d1bae8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs6cj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs6cj-" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.462 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs6cj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.558 [INFO][4587] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" HandleID="k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Workload="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.558 [INFO][4587] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" HandleID="k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Workload="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bb850), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hs6cj", "timestamp":"2025-09-13 09:44:47.558246314 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.558 [INFO][4587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.558 [INFO][4587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.558 [INFO][4587] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.585 [INFO][4587] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.592 [INFO][4587] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.596 [INFO][4587] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.598 [INFO][4587] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.602 [INFO][4587] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.602 [INFO][4587] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.605 [INFO][4587] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5 Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.610 [INFO][4587] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.620 [INFO][4587] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.620 [INFO][4587] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" host="localhost" Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.620 [INFO][4587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 09:44:47.658136 containerd[1530]: 2025-09-13 09:44:47.620 [INFO][4587] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" HandleID="k8s-pod-network.514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Workload="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" Sep 13 09:44:47.658870 containerd[1530]: 2025-09-13 09:44:47.627 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs6cj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ded51c04-df9a-49a6-9f3d-2646f74ffca6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hs6cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523b7d1bae8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:47.658870 containerd[1530]: 2025-09-13 09:44:47.627 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs6cj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" Sep 13 09:44:47.658870 containerd[1530]: 2025-09-13 09:44:47.627 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali523b7d1bae8 ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs6cj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" Sep 13 09:44:47.658870 containerd[1530]: 2025-09-13 09:44:47.634 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs6cj" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" Sep 13 09:44:47.658870 containerd[1530]: 2025-09-13 09:44:47.635 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs6cj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ded51c04-df9a-49a6-9f3d-2646f74ffca6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5", Pod:"coredns-674b8bbfcf-hs6cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523b7d1bae8", MAC:"0a:95:db:f6:a2:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:47.658870 containerd[1530]: 2025-09-13 09:44:47.647 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs6cj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs6cj-eth0" Sep 13 09:44:47.728299 containerd[1530]: time="2025-09-13T09:44:47.728254105Z" level=info msg="connecting to shim 514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5" address="unix:///run/containerd/s/4ab9bf6bba17aae8fe386a823819135947e0736478eef5fdbd0db8241113ffca" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:47.736324 systemd-networkd[1435]: caliad042422b15: Link UP Sep 13 09:44:47.736813 systemd-networkd[1435]: caliad042422b15: Gained carrier Sep 13 09:44:47.742631 sshd[4531]: Connection closed by 10.0.0.1 port 54806 Sep 13 09:44:47.742705 sshd-session[4509]: pam_unix(sshd:session): session closed for user core Sep 13 09:44:47.749414 systemd[1]: sshd@7-10.0.0.32:22-10.0.0.1:54806.service: Deactivated successfully. Sep 13 09:44:47.751978 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 09:44:47.753152 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Sep 13 09:44:47.754975 systemd-logind[1509]: Removed session 8. 
Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.472 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0 calico-kube-controllers-55c649555f- calico-system 96591e0b-8d86-4397-a1b1-01479c0f2d93 835 0 2025-09-13 09:44:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55c649555f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55c649555f-mb6wd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliad042422b15 [] [] }} ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Namespace="calico-system" Pod="calico-kube-controllers-55c649555f-mb6wd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.472 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Namespace="calico-system" Pod="calico-kube-controllers-55c649555f-mb6wd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.565 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" HandleID="k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Workload="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.565 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" HandleID="k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Workload="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55c649555f-mb6wd", "timestamp":"2025-09-13 09:44:47.565155476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.565 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.620 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.621 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.687 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.694 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.700 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.702 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.707 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.707 [INFO][4589] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.710 [INFO][4589] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5 Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.718 [INFO][4589] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.730 [INFO][4589] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.730 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" host="localhost" Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.731 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 09:44:47.757035 containerd[1530]: 2025-09-13 09:44:47.731 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" HandleID="k8s-pod-network.3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Workload="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" Sep 13 09:44:47.757488 containerd[1530]: 2025-09-13 09:44:47.733 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Namespace="calico-system" Pod="calico-kube-controllers-55c649555f-mb6wd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0", GenerateName:"calico-kube-controllers-55c649555f-", Namespace:"calico-system", SelfLink:"", UID:"96591e0b-8d86-4397-a1b1-01479c0f2d93", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c649555f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55c649555f-mb6wd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad042422b15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:47.757488 containerd[1530]: 2025-09-13 09:44:47.733 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Namespace="calico-system" Pod="calico-kube-controllers-55c649555f-mb6wd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" Sep 13 09:44:47.757488 containerd[1530]: 2025-09-13 09:44:47.734 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad042422b15 ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Namespace="calico-system" Pod="calico-kube-controllers-55c649555f-mb6wd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" Sep 13 09:44:47.757488 containerd[1530]: 2025-09-13 09:44:47.737 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Namespace="calico-system" Pod="calico-kube-controllers-55c649555f-mb6wd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" Sep 13 09:44:47.757488 containerd[1530]: 
2025-09-13 09:44:47.737 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Namespace="calico-system" Pod="calico-kube-controllers-55c649555f-mb6wd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0", GenerateName:"calico-kube-controllers-55c649555f-", Namespace:"calico-system", SelfLink:"", UID:"96591e0b-8d86-4397-a1b1-01479c0f2d93", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c649555f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5", Pod:"calico-kube-controllers-55c649555f-mb6wd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad042422b15", MAC:"7a:82:1d:20:73:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:47.757488 containerd[1530]: 
2025-09-13 09:44:47.752 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" Namespace="calico-system" Pod="calico-kube-controllers-55c649555f-mb6wd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c649555f--mb6wd-eth0" Sep 13 09:44:47.770999 systemd[1]: Started cri-containerd-514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5.scope - libcontainer container 514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5. Sep 13 09:44:47.784041 containerd[1530]: time="2025-09-13T09:44:47.783984728Z" level=info msg="connecting to shim 3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5" address="unix:///run/containerd/s/95976b41794852427bae3302aa11de7d2d3f8fd53375576cf610836b13dc6cd7" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:47.788226 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 09:44:47.811943 systemd[1]: Started cri-containerd-3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5.scope - libcontainer container 3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5. 
Sep 13 09:44:47.823316 containerd[1530]: time="2025-09-13T09:44:47.823278385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hs6cj,Uid:ded51c04-df9a-49a6-9f3d-2646f74ffca6,Namespace:kube-system,Attempt:0,} returns sandbox id \"514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5\"" Sep 13 09:44:47.829662 containerd[1530]: time="2025-09-13T09:44:47.829628827Z" level=info msg="CreateContainer within sandbox \"514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 09:44:47.833691 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 09:44:47.839637 containerd[1530]: time="2025-09-13T09:44:47.839208231Z" level=info msg="Container 1a547b6aa3b75dfdf347ece8494952051eff467e955a0862b4251d055d9de457: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:47.855738 containerd[1530]: time="2025-09-13T09:44:47.846492354Z" level=info msg="CreateContainer within sandbox \"514e7d5604dbe7db6657ffebc2a193fdd24e5095877f63bbd4c7e54e5bd7b2e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a547b6aa3b75dfdf347ece8494952051eff467e955a0862b4251d055d9de457\"" Sep 13 09:44:47.856201 containerd[1530]: time="2025-09-13T09:44:47.856147958Z" level=info msg="StartContainer for \"1a547b6aa3b75dfdf347ece8494952051eff467e955a0862b4251d055d9de457\"" Sep 13 09:44:47.857501 containerd[1530]: time="2025-09-13T09:44:47.857471039Z" level=info msg="connecting to shim 1a547b6aa3b75dfdf347ece8494952051eff467e955a0862b4251d055d9de457" address="unix:///run/containerd/s/4ab9bf6bba17aae8fe386a823819135947e0736478eef5fdbd0db8241113ffca" protocol=ttrpc version=3 Sep 13 09:44:47.865813 containerd[1530]: time="2025-09-13T09:44:47.865784362Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-55c649555f-mb6wd,Uid:96591e0b-8d86-4397-a1b1-01479c0f2d93,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5\"" Sep 13 09:44:47.884728 systemd[1]: Started cri-containerd-1a547b6aa3b75dfdf347ece8494952051eff467e955a0862b4251d055d9de457.scope - libcontainer container 1a547b6aa3b75dfdf347ece8494952051eff467e955a0862b4251d055d9de457. Sep 13 09:44:47.912311 containerd[1530]: time="2025-09-13T09:44:47.912222582Z" level=info msg="StartContainer for \"1a547b6aa3b75dfdf347ece8494952051eff467e955a0862b4251d055d9de457\" returns successfully" Sep 13 09:44:48.139766 containerd[1530]: time="2025-09-13T09:44:48.139667033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:48.140456 containerd[1530]: time="2025-09-13T09:44:48.140267794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 09:44:48.140929 containerd[1530]: time="2025-09-13T09:44:48.140906274Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:48.143555 containerd[1530]: time="2025-09-13T09:44:48.143255515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:48.143870 containerd[1530]: time="2025-09-13T09:44:48.143840675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 897.064372ms" Sep 13 09:44:48.143902 containerd[1530]: time="2025-09-13T09:44:48.143873995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 09:44:48.146693 containerd[1530]: time="2025-09-13T09:44:48.146608836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 09:44:48.148593 containerd[1530]: time="2025-09-13T09:44:48.148264357Z" level=info msg="CreateContainer within sandbox \"cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 09:44:48.160137 containerd[1530]: time="2025-09-13T09:44:48.160106681Z" level=info msg="Container 5ff9c844431e0acc7a38b9a6ed60566344dfdf13551e2b622fa227244985f024: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:48.172843 containerd[1530]: time="2025-09-13T09:44:48.172808246Z" level=info msg="CreateContainer within sandbox \"cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5ff9c844431e0acc7a38b9a6ed60566344dfdf13551e2b622fa227244985f024\"" Sep 13 09:44:48.173344 containerd[1530]: time="2025-09-13T09:44:48.173316247Z" level=info msg="StartContainer for \"5ff9c844431e0acc7a38b9a6ed60566344dfdf13551e2b622fa227244985f024\"" Sep 13 09:44:48.175127 containerd[1530]: time="2025-09-13T09:44:48.174833487Z" level=info msg="connecting to shim 5ff9c844431e0acc7a38b9a6ed60566344dfdf13551e2b622fa227244985f024" address="unix:///run/containerd/s/3bbfa38828d8c65b9c8bdc11fc84f908f35c8473347f120fb45a6ecfbfec88d3" protocol=ttrpc version=3 Sep 13 09:44:48.195692 systemd[1]: Started cri-containerd-5ff9c844431e0acc7a38b9a6ed60566344dfdf13551e2b622fa227244985f024.scope - libcontainer container 
5ff9c844431e0acc7a38b9a6ed60566344dfdf13551e2b622fa227244985f024. Sep 13 09:44:48.242048 containerd[1530]: time="2025-09-13T09:44:48.241887394Z" level=info msg="StartContainer for \"5ff9c844431e0acc7a38b9a6ed60566344dfdf13551e2b622fa227244985f024\" returns successfully" Sep 13 09:44:48.335798 systemd-networkd[1435]: calidd80da51cd1: Gained IPv6LL Sep 13 09:44:48.400781 containerd[1530]: time="2025-09-13T09:44:48.400629576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54677bd554-psmpc,Uid:e5aefee4-e7ee-4daf-b20d-499e2884454d,Namespace:calico-apiserver,Attempt:0,}" Sep 13 09:44:48.401097 systemd-networkd[1435]: cali8631ede9125: Gained IPv6LL Sep 13 09:44:48.499430 systemd-networkd[1435]: calif557e7f0b07: Link UP Sep 13 09:44:48.499974 systemd-networkd[1435]: calif557e7f0b07: Gained carrier Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.436 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0 calico-apiserver-54677bd554- calico-apiserver e5aefee4-e7ee-4daf-b20d-499e2884454d 837 0 2025-09-13 09:44:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54677bd554 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54677bd554-psmpc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif557e7f0b07 [] [] }} ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-psmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--psmpc-" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.436 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-psmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.460 [INFO][4801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" HandleID="k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Workload="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.461 [INFO][4801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" HandleID="k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Workload="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54677bd554-psmpc", "timestamp":"2025-09-13 09:44:48.46094296 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.461 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.461 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.461 [INFO][4801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.471 [INFO][4801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.475 [INFO][4801] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.479 [INFO][4801] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.481 [INFO][4801] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.483 [INFO][4801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.483 [INFO][4801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.484 [INFO][4801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.488 [INFO][4801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.494 [INFO][4801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.494 [INFO][4801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" host="localhost" Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.494 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 09:44:48.516460 containerd[1530]: 2025-09-13 09:44:48.494 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" HandleID="k8s-pod-network.9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Workload="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" Sep 13 09:44:48.516951 containerd[1530]: 2025-09-13 09:44:48.496 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-psmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0", GenerateName:"calico-apiserver-54677bd554-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5aefee4-e7ee-4daf-b20d-499e2884454d", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54677bd554", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54677bd554-psmpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif557e7f0b07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:48.516951 containerd[1530]: 2025-09-13 09:44:48.496 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-psmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" Sep 13 09:44:48.516951 containerd[1530]: 2025-09-13 09:44:48.496 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif557e7f0b07 ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-psmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" Sep 13 09:44:48.516951 containerd[1530]: 2025-09-13 09:44:48.500 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-psmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" Sep 13 09:44:48.516951 containerd[1530]: 2025-09-13 09:44:48.501 [INFO][4787] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-psmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0", GenerateName:"calico-apiserver-54677bd554-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5aefee4-e7ee-4daf-b20d-499e2884454d", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54677bd554", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a", Pod:"calico-apiserver-54677bd554-psmpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif557e7f0b07", MAC:"3a:a5:bd:66:f4:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:48.516951 containerd[1530]: 2025-09-13 09:44:48.510 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" Namespace="calico-apiserver" Pod="calico-apiserver-54677bd554-psmpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--54677bd554--psmpc-eth0" Sep 13 09:44:48.534894 containerd[1530]: time="2025-09-13T09:44:48.534858709Z" level=info msg="connecting to shim 9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a" address="unix:///run/containerd/s/366b13315f6383569e5bbb0c7ca98938fadaa480f2cf09ce254ff4e488224d55" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:48.564704 systemd[1]: Started cri-containerd-9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a.scope - libcontainer container 9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a. Sep 13 09:44:48.575726 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 09:44:48.596605 kubelet[2668]: I0913 09:44:48.596571 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 09:44:48.603342 containerd[1530]: time="2025-09-13T09:44:48.603303455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54677bd554-psmpc,Uid:e5aefee4-e7ee-4daf-b20d-499e2884454d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a\"" Sep 13 09:44:48.607176 kubelet[2668]: I0913 09:44:48.607115 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hs6cj" podStartSLOduration=35.607089857 podStartE2EDuration="35.607089857s" podCreationTimestamp="2025-09-13 09:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 09:44:48.600402374 +0000 UTC m=+41.288033378" watchObservedRunningTime="2025-09-13 09:44:48.607089857 +0000 UTC m=+41.294720861" Sep 13 09:44:48.611370 containerd[1530]: 
time="2025-09-13T09:44:48.611340699Z" level=info msg="CreateContainer within sandbox \"9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 09:44:48.624942 containerd[1530]: time="2025-09-13T09:44:48.623332063Z" level=info msg="Container f3ea3935c1ca3d17c1700e6f63d88863ad3e0f4f58080e64286a85e4dfe882a2: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:48.633248 containerd[1530]: time="2025-09-13T09:44:48.633210747Z" level=info msg="CreateContainer within sandbox \"9f2ec65a5568523fd167aad800d9e702cc11b7bec15eea0bcdb857e5cc594c3a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3ea3935c1ca3d17c1700e6f63d88863ad3e0f4f58080e64286a85e4dfe882a2\"" Sep 13 09:44:48.634772 containerd[1530]: time="2025-09-13T09:44:48.634678908Z" level=info msg="StartContainer for \"f3ea3935c1ca3d17c1700e6f63d88863ad3e0f4f58080e64286a85e4dfe882a2\"" Sep 13 09:44:48.640083 containerd[1530]: time="2025-09-13T09:44:48.639711990Z" level=info msg="connecting to shim f3ea3935c1ca3d17c1700e6f63d88863ad3e0f4f58080e64286a85e4dfe882a2" address="unix:///run/containerd/s/366b13315f6383569e5bbb0c7ca98938fadaa480f2cf09ce254ff4e488224d55" protocol=ttrpc version=3 Sep 13 09:44:48.665702 systemd[1]: Started cri-containerd-f3ea3935c1ca3d17c1700e6f63d88863ad3e0f4f58080e64286a85e4dfe882a2.scope - libcontainer container f3ea3935c1ca3d17c1700e6f63d88863ad3e0f4f58080e64286a85e4dfe882a2. 
Sep 13 09:44:48.704487 containerd[1530]: time="2025-09-13T09:44:48.704444415Z" level=info msg="StartContainer for \"f3ea3935c1ca3d17c1700e6f63d88863ad3e0f4f58080e64286a85e4dfe882a2\" returns successfully" Sep 13 09:44:49.041017 systemd-networkd[1435]: caliad042422b15: Gained IPv6LL Sep 13 09:44:49.401501 containerd[1530]: time="2025-09-13T09:44:49.401153639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-4ttmc,Uid:b8b03254-6243-448c-a760-f348a82440e4,Namespace:calico-system,Attempt:0,}" Sep 13 09:44:49.552675 systemd-networkd[1435]: cali523b7d1bae8: Gained IPv6LL Sep 13 09:44:49.571050 systemd-networkd[1435]: cali13702596c5d: Link UP Sep 13 09:44:49.573004 systemd-networkd[1435]: cali13702596c5d: Gained carrier Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.462 [INFO][4915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--4ttmc-eth0 goldmane-54d579b49d- calico-system b8b03254-6243-448c-a760-f348a82440e4 838 0 2025-09-13 09:44:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-4ttmc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali13702596c5d [] [] }} ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Namespace="calico-system" Pod="goldmane-54d579b49d-4ttmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--4ttmc-" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.462 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Namespace="calico-system" Pod="goldmane-54d579b49d-4ttmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" Sep 13 
09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.498 [INFO][4925] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" HandleID="k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Workload="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.499 [INFO][4925] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" HandleID="k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Workload="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033d770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-4ttmc", "timestamp":"2025-09-13 09:44:49.497979115 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.499 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.499 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.499 [INFO][4925] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.514 [INFO][4925] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.522 [INFO][4925] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.528 [INFO][4925] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.532 [INFO][4925] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.535 [INFO][4925] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.536 [INFO][4925] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.539 [INFO][4925] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566 Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.547 [INFO][4925] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.555 [INFO][4925] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.556 [INFO][4925] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" host="localhost" Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.556 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 09:44:49.593584 containerd[1530]: 2025-09-13 09:44:49.556 [INFO][4925] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" HandleID="k8s-pod-network.5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Workload="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" Sep 13 09:44:49.595154 containerd[1530]: 2025-09-13 09:44:49.562 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Namespace="calico-system" Pod="goldmane-54d579b49d-4ttmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--4ttmc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b8b03254-6243-448c-a760-f348a82440e4", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-4ttmc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali13702596c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:49.595154 containerd[1530]: 2025-09-13 09:44:49.562 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Namespace="calico-system" Pod="goldmane-54d579b49d-4ttmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" Sep 13 09:44:49.595154 containerd[1530]: 2025-09-13 09:44:49.562 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13702596c5d ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Namespace="calico-system" Pod="goldmane-54d579b49d-4ttmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" Sep 13 09:44:49.595154 containerd[1530]: 2025-09-13 09:44:49.572 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Namespace="calico-system" Pod="goldmane-54d579b49d-4ttmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" Sep 13 09:44:49.595154 containerd[1530]: 2025-09-13 09:44:49.575 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Namespace="calico-system" Pod="goldmane-54d579b49d-4ttmc" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--4ttmc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b8b03254-6243-448c-a760-f348a82440e4", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 9, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566", Pod:"goldmane-54d579b49d-4ttmc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali13702596c5d", MAC:"f6:aa:f0:15:00:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 09:44:49.595154 containerd[1530]: 2025-09-13 09:44:49.587 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" Namespace="calico-system" Pod="goldmane-54d579b49d-4ttmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--4ttmc-eth0" Sep 13 09:44:49.696737 containerd[1530]: time="2025-09-13T09:44:49.696444948Z" level=info msg="connecting to shim 
5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566" address="unix:///run/containerd/s/9648e3e38b1b7290608c67c84b92772d7c5197f4fc860087cbe24c760f59240c" namespace=k8s.io protocol=ttrpc version=3 Sep 13 09:44:49.741806 systemd[1]: Started cri-containerd-5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566.scope - libcontainer container 5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566. Sep 13 09:44:49.762036 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 09:44:49.811046 containerd[1530]: time="2025-09-13T09:44:49.810886430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-4ttmc,Uid:b8b03254-6243-448c-a760-f348a82440e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566\"" Sep 13 09:44:49.912882 containerd[1530]: time="2025-09-13T09:44:49.912837587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:49.919191 containerd[1530]: time="2025-09-13T09:44:49.919151510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 09:44:49.920458 containerd[1530]: time="2025-09-13T09:44:49.920372550Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:49.926424 containerd[1530]: time="2025-09-13T09:44:49.926364952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 09:44:49.927097 containerd[1530]: time="2025-09-13T09:44:49.927052073Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.780399157s" Sep 13 09:44:49.927097 containerd[1530]: time="2025-09-13T09:44:49.927089793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 09:44:49.928819 containerd[1530]: time="2025-09-13T09:44:49.928579913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 09:44:49.942121 containerd[1530]: time="2025-09-13T09:44:49.941908398Z" level=info msg="CreateContainer within sandbox \"3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 09:44:49.950158 containerd[1530]: time="2025-09-13T09:44:49.950073641Z" level=info msg="Container 23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d: CDI devices from CRI Config.CDIDevices: []" Sep 13 09:44:49.964324 containerd[1530]: time="2025-09-13T09:44:49.964230246Z" level=info msg="CreateContainer within sandbox \"3a8dccb403cc8f6dfdaed429ada87adaa8640a70c368dc3f3fd6f4e3f4016ad5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d\"" Sep 13 09:44:49.965894 containerd[1530]: time="2025-09-13T09:44:49.965862327Z" level=info msg="StartContainer for \"23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d\"" Sep 13 09:44:49.967799 containerd[1530]: time="2025-09-13T09:44:49.967770208Z" level=info msg="connecting to shim 23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d" 
address="unix:///run/containerd/s/95976b41794852427bae3302aa11de7d2d3f8fd53375576cf610836b13dc6cd7" protocol=ttrpc version=3 Sep 13 09:44:49.993703 systemd[1]: Started cri-containerd-23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d.scope - libcontainer container 23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d. Sep 13 09:44:50.057529 containerd[1530]: time="2025-09-13T09:44:50.057493319Z" level=info msg="StartContainer for \"23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d\" returns successfully" Sep 13 09:44:50.511718 systemd-networkd[1435]: calif557e7f0b07: Gained IPv6LL Sep 13 09:44:50.620711 kubelet[2668]: I0913 09:44:50.620657 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55c649555f-mb6wd" podStartSLOduration=22.560709844 podStartE2EDuration="24.620640034s" podCreationTimestamp="2025-09-13 09:44:26 +0000 UTC" firstStartedPulling="2025-09-13 09:44:47.868489763 +0000 UTC m=+40.556120767" lastFinishedPulling="2025-09-13 09:44:49.928419953 +0000 UTC m=+42.616050957" observedRunningTime="2025-09-13 09:44:50.619142793 +0000 UTC m=+43.306773797" watchObservedRunningTime="2025-09-13 09:44:50.620640034 +0000 UTC m=+43.308271078" Sep 13 09:44:50.621872 kubelet[2668]: I0913 09:44:50.620754 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54677bd554-psmpc" podStartSLOduration=29.620748234 podStartE2EDuration="29.620748234s" podCreationTimestamp="2025-09-13 09:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 09:44:49.624066441 +0000 UTC m=+42.311697445" watchObservedRunningTime="2025-09-13 09:44:50.620748234 +0000 UTC m=+43.308379238" Sep 13 09:44:51.407755 systemd-networkd[1435]: cali13702596c5d: Gained IPv6LL Sep 13 09:44:51.457418 containerd[1530]: time="2025-09-13T09:44:51.457353153Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:51.458303 containerd[1530]: time="2025-09-13T09:44:51.458265273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 13 09:44:51.459091 containerd[1530]: time="2025-09-13T09:44:51.459042033Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:51.461333 containerd[1530]: time="2025-09-13T09:44:51.461286434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:51.461864 containerd[1530]: time="2025-09-13T09:44:51.461829354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.533215321s"
Sep 13 09:44:51.461911 containerd[1530]: time="2025-09-13T09:44:51.461863514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 13 09:44:51.464885 containerd[1530]: time="2025-09-13T09:44:51.463833475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 13 09:44:51.466285 containerd[1530]: time="2025-09-13T09:44:51.466258195Z" level=info msg="CreateContainer within sandbox \"cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 09:44:51.473779 containerd[1530]: time="2025-09-13T09:44:51.473734598Z" level=info msg="Container 6d96b879962cf147f27a3a02d2b525ea02fa5d08670d4c6ced80c188a1a69ef1: CDI devices from CRI Config.CDIDevices: []"
Sep 13 09:44:51.483225 containerd[1530]: time="2025-09-13T09:44:51.483153241Z" level=info msg="CreateContainer within sandbox \"cc739ad3a2e93990876cf44146d32f5365a0c6d565a743e87bc126073d53b2f6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6d96b879962cf147f27a3a02d2b525ea02fa5d08670d4c6ced80c188a1a69ef1\""
Sep 13 09:44:51.483794 containerd[1530]: time="2025-09-13T09:44:51.483735121Z" level=info msg="StartContainer for \"6d96b879962cf147f27a3a02d2b525ea02fa5d08670d4c6ced80c188a1a69ef1\""
Sep 13 09:44:51.485296 containerd[1530]: time="2025-09-13T09:44:51.485262282Z" level=info msg="connecting to shim 6d96b879962cf147f27a3a02d2b525ea02fa5d08670d4c6ced80c188a1a69ef1" address="unix:///run/containerd/s/3bbfa38828d8c65b9c8bdc11fc84f908f35c8473347f120fb45a6ecfbfec88d3" protocol=ttrpc version=3
Sep 13 09:44:51.505742 systemd[1]: Started cri-containerd-6d96b879962cf147f27a3a02d2b525ea02fa5d08670d4c6ced80c188a1a69ef1.scope - libcontainer container 6d96b879962cf147f27a3a02d2b525ea02fa5d08670d4c6ced80c188a1a69ef1.
Sep 13 09:44:51.571904 containerd[1530]: time="2025-09-13T09:44:51.571801510Z" level=info msg="StartContainer for \"6d96b879962cf147f27a3a02d2b525ea02fa5d08670d4c6ced80c188a1a69ef1\" returns successfully"
Sep 13 09:44:51.610957 kubelet[2668]: I0913 09:44:51.610925 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 09:44:51.623245 kubelet[2668]: I0913 09:44:51.623132 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jcdwp" podStartSLOduration=21.028805673 podStartE2EDuration="25.623116846s" podCreationTimestamp="2025-09-13 09:44:26 +0000 UTC" firstStartedPulling="2025-09-13 09:44:46.868656621 +0000 UTC m=+39.556287625" lastFinishedPulling="2025-09-13 09:44:51.462967834 +0000 UTC m=+44.150598798" observedRunningTime="2025-09-13 09:44:51.622404526 +0000 UTC m=+44.310035530" watchObservedRunningTime="2025-09-13 09:44:51.623116846 +0000 UTC m=+44.310747850"
Sep 13 09:44:52.053520 containerd[1530]: time="2025-09-13T09:44:52.053470024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d\" id:\"24a9bfe8677c2d9744f2274d4f809c11ee70bc6812ce65ef8f04610d29f68f4d\" pid:5094 exited_at:{seconds:1757756692 nanos:53171024}"
Sep 13 09:44:52.099845 containerd[1530]: time="2025-09-13T09:44:52.099809318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d\" id:\"bcd477730d7ea83f4e209483b56ced58e4a3dea594abbb061b56692a3304ae78\" pid:5116 exited_at:{seconds:1757756692 nanos:99593558}"
Sep 13 09:44:52.477455 kubelet[2668]: I0913 09:44:52.477255 2668 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 09:44:52.486567 kubelet[2668]: I0913 09:44:52.486212 2668 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 09:44:52.756962 systemd[1]: Started sshd@8-10.0.0.32:22-10.0.0.1:36654.service - OpenSSH per-connection server daemon (10.0.0.1:36654).
Sep 13 09:44:52.833216 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 36654 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:44:52.835465 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:44:52.843967 systemd-logind[1509]: New session 9 of user core.
Sep 13 09:44:52.849734 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 09:44:52.872473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671947925.mount: Deactivated successfully.
Sep 13 09:44:53.056089 sshd[5137]: Connection closed by 10.0.0.1 port 36654
Sep 13 09:44:53.056941 sshd-session[5133]: pam_unix(sshd:session): session closed for user core
Sep 13 09:44:53.069405 systemd[1]: sshd@8-10.0.0.32:22-10.0.0.1:36654.service: Deactivated successfully.
Sep 13 09:44:53.072885 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 09:44:53.073823 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Sep 13 09:44:53.077868 systemd[1]: Started sshd@9-10.0.0.32:22-10.0.0.1:36656.service - OpenSSH per-connection server daemon (10.0.0.1:36656).
Sep 13 09:44:53.078915 systemd-logind[1509]: Removed session 9.
Sep 13 09:44:53.137401 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 36656 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:44:53.140244 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:44:53.145758 systemd-logind[1509]: New session 10 of user core.
Sep 13 09:44:53.153777 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 09:44:53.278677 containerd[1530]: time="2025-09-13T09:44:53.278632631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:53.279831 containerd[1530]: time="2025-09-13T09:44:53.279806111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 13 09:44:53.280497 containerd[1530]: time="2025-09-13T09:44:53.280462311Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:53.283574 containerd[1530]: time="2025-09-13T09:44:53.283036832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 09:44:53.283703 containerd[1530]: time="2025-09-13T09:44:53.283681232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 1.819816077s"
Sep 13 09:44:53.283776 containerd[1530]: time="2025-09-13T09:44:53.283761912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 13 09:44:53.291674 containerd[1530]: time="2025-09-13T09:44:53.291644874Z" level=info msg="CreateContainer within sandbox \"5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 13 09:44:53.372940 containerd[1530]: time="2025-09-13T09:44:53.372799897Z" level=info msg="Container f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd: CDI devices from CRI Config.CDIDevices: []"
Sep 13 09:44:53.375165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87748070.mount: Deactivated successfully.
Sep 13 09:44:53.381290 containerd[1530]: time="2025-09-13T09:44:53.381236260Z" level=info msg="CreateContainer within sandbox \"5f96fbb64eeff559849dda2b7e54f81159b10ed218bc07f870748c3c7a881566\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd\""
Sep 13 09:44:53.381995 containerd[1530]: time="2025-09-13T09:44:53.381889060Z" level=info msg="StartContainer for \"f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd\""
Sep 13 09:44:53.383429 containerd[1530]: time="2025-09-13T09:44:53.383127620Z" level=info msg="connecting to shim f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd" address="unix:///run/containerd/s/9648e3e38b1b7290608c67c84b92772d7c5197f4fc860087cbe24c760f59240c" protocol=ttrpc version=3
Sep 13 09:44:53.406705 systemd[1]: Started cri-containerd-f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd.scope - libcontainer container f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd.
Sep 13 09:44:53.410591 sshd[5158]: Connection closed by 10.0.0.1 port 36656
Sep 13 09:44:53.411310 sshd-session[5155]: pam_unix(sshd:session): session closed for user core
Sep 13 09:44:53.419449 systemd[1]: sshd@9-10.0.0.32:22-10.0.0.1:36656.service: Deactivated successfully.
Sep 13 09:44:53.422517 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 09:44:53.423843 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit.
Sep 13 09:44:53.427101 systemd[1]: Started sshd@10-10.0.0.32:22-10.0.0.1:36660.service - OpenSSH per-connection server daemon (10.0.0.1:36660).
Sep 13 09:44:53.429049 systemd-logind[1509]: Removed session 10.
Sep 13 09:44:53.472675 containerd[1530]: time="2025-09-13T09:44:53.472605766Z" level=info msg="StartContainer for \"f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd\" returns successfully"
Sep 13 09:44:53.485512 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 36660 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:44:53.486773 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:44:53.495498 systemd-logind[1509]: New session 11 of user core.
Sep 13 09:44:53.509978 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 09:44:53.652521 sshd[5207]: Connection closed by 10.0.0.1 port 36660
Sep 13 09:44:53.655746 sshd-session[5193]: pam_unix(sshd:session): session closed for user core
Sep 13 09:44:53.659294 systemd[1]: sshd@10-10.0.0.32:22-10.0.0.1:36660.service: Deactivated successfully.
Sep 13 09:44:53.664397 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 09:44:53.665206 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit.
Sep 13 09:44:53.666741 systemd-logind[1509]: Removed session 11.
Sep 13 09:44:54.705331 containerd[1530]: time="2025-09-13T09:44:54.705292104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd\" id:\"242223ef76aad446c234df1d4ebfe33e0420a8079b0755f2d7a0b2fef7428e74\" pid:5237 exit_status:1 exited_at:{seconds:1757756694 nanos:704755944}"
Sep 13 09:44:55.719937 containerd[1530]: time="2025-09-13T09:44:55.719877122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd\" id:\"113ee9bd279f7aaed43796c227adc5d20c6af6ada8961638598f062270b0024b\" pid:5262 exit_status:1 exited_at:{seconds:1757756695 nanos:719533362}"
Sep 13 09:44:58.666738 systemd[1]: Started sshd@11-10.0.0.32:22-10.0.0.1:36676.service - OpenSSH per-connection server daemon (10.0.0.1:36676).
Sep 13 09:44:58.726324 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 36676 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:44:58.727500 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:44:58.731142 systemd-logind[1509]: New session 12 of user core.
Sep 13 09:44:58.738689 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 09:44:58.908884 sshd[5286]: Connection closed by 10.0.0.1 port 36676
Sep 13 09:44:58.909201 sshd-session[5283]: pam_unix(sshd:session): session closed for user core
Sep 13 09:44:58.913044 systemd[1]: sshd@11-10.0.0.32:22-10.0.0.1:36676.service: Deactivated successfully.
Sep 13 09:44:58.914970 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 09:44:58.915723 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit.
Sep 13 09:44:58.916909 systemd-logind[1509]: Removed session 12.
Sep 13 09:45:03.928415 systemd[1]: Started sshd@12-10.0.0.32:22-10.0.0.1:51516.service - OpenSSH per-connection server daemon (10.0.0.1:51516).
Sep 13 09:45:03.984907 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 51516 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:03.988926 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:03.994403 systemd-logind[1509]: New session 13 of user core.
Sep 13 09:45:04.004695 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 09:45:04.146934 sshd[5310]: Connection closed by 10.0.0.1 port 51516
Sep 13 09:45:04.147601 sshd-session[5307]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:04.150869 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit.
Sep 13 09:45:04.151093 systemd[1]: sshd@12-10.0.0.32:22-10.0.0.1:51516.service: Deactivated successfully.
Sep 13 09:45:04.152721 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 09:45:04.153814 systemd-logind[1509]: Removed session 13.
Sep 13 09:45:09.160405 systemd[1]: Started sshd@13-10.0.0.32:22-10.0.0.1:51528.service - OpenSSH per-connection server daemon (10.0.0.1:51528).
Sep 13 09:45:09.223232 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 51528 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:09.225617 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:09.231445 systemd-logind[1509]: New session 14 of user core.
Sep 13 09:45:09.236857 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 09:45:09.416775 sshd[5331]: Connection closed by 10.0.0.1 port 51528
Sep 13 09:45:09.417013 sshd-session[5328]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:09.421470 systemd[1]: sshd@13-10.0.0.32:22-10.0.0.1:51528.service: Deactivated successfully.
Sep 13 09:45:09.423886 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 09:45:09.425998 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit.
Sep 13 09:45:09.428398 systemd-logind[1509]: Removed session 14.
Sep 13 09:45:09.630425 kubelet[2668]: I0913 09:45:09.629996 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 09:45:09.672571 kubelet[2668]: I0913 09:45:09.671747 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-4ttmc" podStartSLOduration=40.208231952 podStartE2EDuration="43.67173191s" podCreationTimestamp="2025-09-13 09:44:26 +0000 UTC" firstStartedPulling="2025-09-13 09:44:49.821170794 +0000 UTC m=+42.508801758" lastFinishedPulling="2025-09-13 09:44:53.284670712 +0000 UTC m=+45.972301716" observedRunningTime="2025-09-13 09:44:53.639316613 +0000 UTC m=+46.326947617" watchObservedRunningTime="2025-09-13 09:45:09.67173191 +0000 UTC m=+62.359362914"
Sep 13 09:45:10.601372 containerd[1530]: time="2025-09-13T09:45:10.601273600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf1d76d6dd1208327e40aa5dcd9bbebab7577ad4699e40c2a57be8379f26b16\" id:\"13784da9d7217a6e30fedd43b02614f45b8723d5265dd16aa73a0f5d8547a67e\" pid:5358 exited_at:{seconds:1757756710 nanos:600806480}"
Sep 13 09:45:14.433157 systemd[1]: Started sshd@14-10.0.0.32:22-10.0.0.1:44688.service - OpenSSH per-connection server daemon (10.0.0.1:44688).
Sep 13 09:45:14.503981 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 44688 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:14.506004 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:14.511254 systemd-logind[1509]: New session 15 of user core.
Sep 13 09:45:14.513700 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 09:45:14.740785 sshd[5377]: Connection closed by 10.0.0.1 port 44688
Sep 13 09:45:14.741461 sshd-session[5374]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:14.751424 systemd[1]: sshd@14-10.0.0.32:22-10.0.0.1:44688.service: Deactivated successfully.
Sep 13 09:45:14.754872 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 09:45:14.755682 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Sep 13 09:45:14.758124 systemd[1]: Started sshd@15-10.0.0.32:22-10.0.0.1:44690.service - OpenSSH per-connection server daemon (10.0.0.1:44690).
Sep 13 09:45:14.758994 systemd-logind[1509]: Removed session 15.
Sep 13 09:45:14.820577 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 44690 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:14.821159 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:14.828416 systemd-logind[1509]: New session 16 of user core.
Sep 13 09:45:14.835789 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 09:45:15.056170 sshd[5394]: Connection closed by 10.0.0.1 port 44690
Sep 13 09:45:15.056923 sshd-session[5391]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:15.069426 systemd[1]: sshd@15-10.0.0.32:22-10.0.0.1:44690.service: Deactivated successfully.
Sep 13 09:45:15.075519 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 09:45:15.076792 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Sep 13 09:45:15.079656 systemd-logind[1509]: Removed session 16.
Sep 13 09:45:15.081317 systemd[1]: Started sshd@16-10.0.0.32:22-10.0.0.1:44692.service - OpenSSH per-connection server daemon (10.0.0.1:44692).
Sep 13 09:45:15.137929 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 44692 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:15.139310 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:15.145448 systemd-logind[1509]: New session 17 of user core.
Sep 13 09:45:15.153736 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 09:45:15.777710 sshd[5409]: Connection closed by 10.0.0.1 port 44692
Sep 13 09:45:15.778696 sshd-session[5406]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:15.791193 systemd[1]: sshd@16-10.0.0.32:22-10.0.0.1:44692.service: Deactivated successfully.
Sep 13 09:45:15.795372 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 09:45:15.797052 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Sep 13 09:45:15.803810 systemd[1]: Started sshd@17-10.0.0.32:22-10.0.0.1:44706.service - OpenSSH per-connection server daemon (10.0.0.1:44706).
Sep 13 09:45:15.804745 systemd-logind[1509]: Removed session 17.
Sep 13 09:45:15.864372 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 44706 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:15.865886 sshd-session[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:15.870058 systemd-logind[1509]: New session 18 of user core.
Sep 13 09:45:15.879715 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 09:45:16.156137 sshd[5431]: Connection closed by 10.0.0.1 port 44706
Sep 13 09:45:16.156833 sshd-session[5428]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:16.165146 systemd[1]: sshd@17-10.0.0.32:22-10.0.0.1:44706.service: Deactivated successfully.
Sep 13 09:45:16.168114 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 09:45:16.170043 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Sep 13 09:45:16.172610 systemd[1]: Started sshd@18-10.0.0.32:22-10.0.0.1:44712.service - OpenSSH per-connection server daemon (10.0.0.1:44712).
Sep 13 09:45:16.177974 systemd-logind[1509]: Removed session 18.
Sep 13 09:45:16.236415 sshd[5443]: Accepted publickey for core from 10.0.0.1 port 44712 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:16.237885 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:16.244974 systemd-logind[1509]: New session 19 of user core.
Sep 13 09:45:16.249722 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 09:45:16.389383 containerd[1530]: time="2025-09-13T09:45:16.389338125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd\" id:\"7f58331870c090b7d317317d19c0d19bf833e91cdf29dfe1b46e73b62a292746\" pid:5460 exited_at:{seconds:1757756716 nanos:389070842}"
Sep 13 09:45:16.438222 sshd[5447]: Connection closed by 10.0.0.1 port 44712
Sep 13 09:45:16.439036 sshd-session[5443]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:16.442937 systemd[1]: sshd@18-10.0.0.32:22-10.0.0.1:44712.service: Deactivated successfully.
Sep 13 09:45:16.445233 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 09:45:16.446522 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Sep 13 09:45:16.448030 systemd-logind[1509]: Removed session 19.
Sep 13 09:45:21.455970 systemd[1]: Started sshd@19-10.0.0.32:22-10.0.0.1:56836.service - OpenSSH per-connection server daemon (10.0.0.1:56836).
Sep 13 09:45:21.518486 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 56836 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:21.519832 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:21.524394 systemd-logind[1509]: New session 20 of user core.
Sep 13 09:45:21.535753 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 09:45:21.664960 sshd[5495]: Connection closed by 10.0.0.1 port 56836
Sep 13 09:45:21.665260 sshd-session[5492]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:21.669075 systemd[1]: sshd@19-10.0.0.32:22-10.0.0.1:56836.service: Deactivated successfully.
Sep 13 09:45:21.671994 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 09:45:21.672979 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Sep 13 09:45:21.674932 systemd-logind[1509]: Removed session 20.
Sep 13 09:45:22.086766 containerd[1530]: time="2025-09-13T09:45:22.086725356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23d7ea78273193b5ecaac6485a94677e14ff58a3efa71ebe68e1aa9bbe647e8d\" id:\"cab6f785b3b1e090d3e7de1f80dc22e639aaac2e2439952d21775700d56a34ab\" pid:5519 exited_at:{seconds:1757756722 nanos:86290392}"
Sep 13 09:45:25.686914 containerd[1530]: time="2025-09-13T09:45:25.686864294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1de42eae251e0162aa85607fb70d37510406ac110af9bc6aed936ab045243dd\" id:\"0f5fa5d065629fe685606ee87fcd9196a46f1d8978b3aae9024ca31fb8fad087\" pid:5540 exited_at:{seconds:1757756725 nanos:686595892}"
Sep 13 09:45:26.678785 systemd[1]: Started sshd@20-10.0.0.32:22-10.0.0.1:56842.service - OpenSSH per-connection server daemon (10.0.0.1:56842).
Sep 13 09:45:26.739570 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 56842 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:26.741348 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:26.746719 systemd-logind[1509]: New session 21 of user core.
Sep 13 09:45:26.752788 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 09:45:26.878975 sshd[5557]: Connection closed by 10.0.0.1 port 56842
Sep 13 09:45:26.879311 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:26.883463 systemd[1]: sshd@20-10.0.0.32:22-10.0.0.1:56842.service: Deactivated successfully.
Sep 13 09:45:26.887126 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 09:45:26.887844 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit.
Sep 13 09:45:26.888899 systemd-logind[1509]: Removed session 21.
Sep 13 09:45:31.890845 systemd[1]: Started sshd@21-10.0.0.32:22-10.0.0.1:41432.service - OpenSSH per-connection server daemon (10.0.0.1:41432).
Sep 13 09:45:31.937916 sshd[5572]: Accepted publickey for core from 10.0.0.1 port 41432 ssh2: RSA SHA256:0vV6oKZET8luKNx2ju5v6UV3RLMjXdyvgHKHpQ6CRTE
Sep 13 09:45:31.939256 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 09:45:31.943242 systemd-logind[1509]: New session 22 of user core.
Sep 13 09:45:31.952717 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 09:45:32.083823 sshd[5575]: Connection closed by 10.0.0.1 port 41432
Sep 13 09:45:32.084386 sshd-session[5572]: pam_unix(sshd:session): session closed for user core
Sep 13 09:45:32.087855 systemd[1]: sshd@21-10.0.0.32:22-10.0.0.1:41432.service: Deactivated successfully.
Sep 13 09:45:32.089709 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 09:45:32.090490 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit.
Sep 13 09:45:32.091959 systemd-logind[1509]: Removed session 22.