Sep 12 17:10:31.770252 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:10:31.770273 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025
Sep 12 17:10:31.770283 kernel: KASLR enabled
Sep 12 17:10:31.770288 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:10:31.770294 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Sep 12 17:10:31.770299 kernel: random: crng init done
Sep 12 17:10:31.770306 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 17:10:31.770312 kernel: secureboot: Secure boot enabled
Sep 12 17:10:31.770317 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:10:31.770325 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 12 17:10:31.770331 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:10:31.770336 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770342 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770348 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770355 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770362 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770369 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770403 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770411 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770417 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:10:31.770423 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 17:10:31.770429 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 17:10:31.770435 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:10:31.770441 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 12 17:10:31.770447 kernel: Zone ranges:
Sep 12 17:10:31.770455 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:10:31.770461 kernel: DMA32 empty
Sep 12 17:10:31.770467 kernel: Normal empty
Sep 12 17:10:31.770473 kernel: Device empty
Sep 12 17:10:31.770479 kernel: Movable zone start for each node
Sep 12 17:10:31.770490 kernel: Early memory node ranges
Sep 12 17:10:31.770496 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 12 17:10:31.770502 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 12 17:10:31.770508 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 12 17:10:31.770514 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 12 17:10:31.770521 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 12 17:10:31.770527 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 12 17:10:31.770534 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 12 17:10:31.770540 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 12 17:10:31.770546 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 17:10:31.770555 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:10:31.770561 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 17:10:31.770567 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 12 17:10:31.770574 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:10:31.770582 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:10:31.770589 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:10:31.770595 kernel: psci: Trusted OS migration not required
Sep 12 17:10:31.770602 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:10:31.770609 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 17:10:31.770615 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 17:10:31.770621 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 17:10:31.770628 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 17:10:31.770634 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:10:31.770643 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:10:31.770649 kernel: CPU features: detected: Spectre-v4
Sep 12 17:10:31.770656 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:10:31.770663 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:10:31.770669 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:10:31.770676 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:10:31.770682 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:10:31.770688 kernel: alternatives: applying boot alternatives
Sep 12 17:10:31.770696 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:10:31.770703 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:10:31.770709 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:10:31.770724 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:10:31.770732 kernel: Fallback order for Node 0: 0
Sep 12 17:10:31.770738 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 12 17:10:31.770745 kernel: Policy zone: DMA
Sep 12 17:10:31.770751 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:10:31.770757 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 12 17:10:31.770764 kernel: software IO TLB: area num 4.
Sep 12 17:10:31.770770 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 12 17:10:31.770777 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 12 17:10:31.770783 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:10:31.770790 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:10:31.770797 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:10:31.770805 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:10:31.770811 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:10:31.770818 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:10:31.770824 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:10:31.770830 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:10:31.770837 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:10:31.770843 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:10:31.770850 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:10:31.770856 kernel: GICv3: 256 SPIs implemented
Sep 12 17:10:31.770863 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:10:31.770869 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:10:31.770877 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:10:31.770883 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 17:10:31.770890 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 17:10:31.770896 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 17:10:31.770903 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:10:31.770909 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:10:31.770916 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 12 17:10:31.770922 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 12 17:10:31.770929 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:10:31.770935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:10:31.770942 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:10:31.770948 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:10:31.770956 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:10:31.770963 kernel: arm-pv: using stolen time PV
Sep 12 17:10:31.770969 kernel: Console: colour dummy device 80x25
Sep 12 17:10:31.770976 kernel: ACPI: Core revision 20240827
Sep 12 17:10:31.770983 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:10:31.770989 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:10:31.770996 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:10:31.771003 kernel: landlock: Up and running.
Sep 12 17:10:31.771009 kernel: SELinux: Initializing.
Sep 12 17:10:31.771017 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:10:31.771024 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:10:31.771031 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:10:31.771038 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:10:31.771044 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:10:31.771051 kernel: Remapping and enabling EFI services.
Sep 12 17:10:31.771058 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:10:31.771065 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:10:31.771071 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 17:10:31.771079 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 12 17:10:31.771091 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:10:31.771098 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:10:31.771106 kernel: Detected PIPT I-cache on CPU2
Sep 12 17:10:31.771113 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 17:10:31.771120 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 12 17:10:31.771127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:10:31.771133 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 17:10:31.771141 kernel: Detected PIPT I-cache on CPU3
Sep 12 17:10:31.771149 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 17:10:31.771156 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 12 17:10:31.771163 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:10:31.771170 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 17:10:31.771177 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:10:31.771184 kernel: SMP: Total of 4 processors activated.
Sep 12 17:10:31.771190 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:10:31.771198 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:10:31.771205 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:10:31.771213 kernel: CPU features: detected: Common not Private translations
Sep 12 17:10:31.771221 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:10:31.771228 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 17:10:31.771235 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:10:31.771242 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:10:31.771250 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:10:31.771257 kernel: CPU features: detected: RAS Extension Support
Sep 12 17:10:31.771264 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 17:10:31.771271 kernel: alternatives: applying system-wide alternatives
Sep 12 17:10:31.771279 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 12 17:10:31.771287 kernel: Memory: 2422436K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 127516K reserved, 16384K cma-reserved)
Sep 12 17:10:31.771306 kernel: devtmpfs: initialized
Sep 12 17:10:31.771313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:10:31.771320 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:10:31.771327 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:10:31.771334 kernel: 0 pages in range for non-PLT usage
Sep 12 17:10:31.771341 kernel: 508576 pages in range for PLT usage
Sep 12 17:10:31.771348 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:10:31.771356 kernel: SMBIOS 3.0.0 present.
Sep 12 17:10:31.771363 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 12 17:10:31.771370 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:10:31.771384 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:10:31.771391 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:10:31.771398 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:10:31.771405 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:10:31.771412 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:10:31.771420 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Sep 12 17:10:31.771428 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:10:31.771435 kernel: cpuidle: using governor menu
Sep 12 17:10:31.771442 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:10:31.771450 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:10:31.771457 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:10:31.771463 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:10:31.771470 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:10:31.771477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:10:31.771486 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:10:31.771493 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:10:31.771500 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:10:31.771507 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:10:31.771514 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:10:31.771521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:10:31.771528 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:10:31.771535 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:10:31.771543 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:10:31.771550 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:10:31.771558 kernel: ACPI: Interpreter enabled
Sep 12 17:10:31.771565 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:10:31.771571 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:10:31.771578 kernel: ACPI: CPU0 has been hot-added
Sep 12 17:10:31.771585 kernel: ACPI: CPU1 has been hot-added
Sep 12 17:10:31.771592 kernel: ACPI: CPU2 has been hot-added
Sep 12 17:10:31.771599 kernel: ACPI: CPU3 has been hot-added
Sep 12 17:10:31.771606 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:10:31.771613 kernel: printk: legacy console [ttyAMA0] enabled
Sep 12 17:10:31.771622 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:10:31.771758 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:10:31.771824 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:10:31.771883 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:10:31.771942 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 17:10:31.772000 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 17:10:31.772010 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 17:10:31.772020 kernel: PCI host bridge to bus 0000:00
Sep 12 17:10:31.772092 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 17:10:31.772159 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:10:31.772212 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 17:10:31.772286 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:10:31.772362 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:10:31.772473 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 17:10:31.772541 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 12 17:10:31.772601 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 12 17:10:31.772658 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:10:31.772722 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 12 17:10:31.772788 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 12 17:10:31.772848 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 12 17:10:31.772907 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 17:10:31.772959 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:10:31.773011 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 17:10:31.773026 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:10:31.773033 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:10:31.773040 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:10:31.773047 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:10:31.773054 kernel: iommu: Default domain type: Translated
Sep 12 17:10:31.773064 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:10:31.773071 kernel: efivars: Registered efivars operations
Sep 12 17:10:31.773078 kernel: vgaarb: loaded
Sep 12 17:10:31.773085 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:10:31.773092 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:10:31.773099 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:10:31.773106 kernel: pnp: PnP ACPI init
Sep 12 17:10:31.773171 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 17:10:31.773181 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:10:31.773190 kernel: NET: Registered PF_INET protocol family
Sep 12 17:10:31.773197 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:10:31.773204 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:10:31.773211 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:10:31.773218 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:10:31.773225 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:10:31.773232 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:10:31.773239 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:10:31.773246 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:10:31.773255 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:10:31.773262 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:10:31.773268 kernel: kvm [1]: HYP mode not available
Sep 12 17:10:31.773276 kernel: Initialise system trusted keyrings
Sep 12 17:10:31.773283 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:10:31.773290 kernel: Key type asymmetric registered
Sep 12 17:10:31.773297 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:10:31.773304 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 17:10:31.773311 kernel: io scheduler mq-deadline registered
Sep 12 17:10:31.773319 kernel: io scheduler kyber registered
Sep 12 17:10:31.773327 kernel: io scheduler bfq registered
Sep 12 17:10:31.773334 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:10:31.773341 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:10:31.773349 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:10:31.773436 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 17:10:31.773447 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:10:31.773454 kernel: thunder_xcv, ver 1.0
Sep 12 17:10:31.773461 kernel: thunder_bgx, ver 1.0
Sep 12 17:10:31.773471 kernel: nicpf, ver 1.0
Sep 12 17:10:31.773478 kernel: nicvf, ver 1.0
Sep 12 17:10:31.773559 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:10:31.773626 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:10:31 UTC (1757697031)
Sep 12 17:10:31.773636 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:10:31.773643 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 12 17:10:31.773653 kernel: watchdog: NMI not fully supported
Sep 12 17:10:31.773660 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:10:31.773669 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:10:31.773678 kernel: Segment Routing with IPv6
Sep 12 17:10:31.773687 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:10:31.773694 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:10:31.773701 kernel: Key type dns_resolver registered
Sep 12 17:10:31.773709 kernel: registered taskstats version 1
Sep 12 17:10:31.773728 kernel: Loading compiled-in X.509 certificates
Sep 12 17:10:31.773739 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7'
Sep 12 17:10:31.773746 kernel: Demotion targets for Node 0: null
Sep 12 17:10:31.773755 kernel: Key type .fscrypt registered
Sep 12 17:10:31.773763 kernel: Key type fscrypt-provisioning registered
Sep 12 17:10:31.773769 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:10:31.773776 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:10:31.773783 kernel: ima: No architecture policies found
Sep 12 17:10:31.773790 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:10:31.773797 kernel: clk: Disabling unused clocks
Sep 12 17:10:31.773804 kernel: PM: genpd: Disabling unused power domains
Sep 12 17:10:31.773811 kernel: Warning: unable to open an initial console.
Sep 12 17:10:31.773820 kernel: Freeing unused kernel memory: 38912K
Sep 12 17:10:31.773827 kernel: Run /init as init process
Sep 12 17:10:31.773834 kernel: with arguments:
Sep 12 17:10:31.773840 kernel: /init
Sep 12 17:10:31.773847 kernel: with environment:
Sep 12 17:10:31.773854 kernel: HOME=/
Sep 12 17:10:31.773860 kernel: TERM=linux
Sep 12 17:10:31.773867 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:10:31.773875 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:10:31.773887 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:10:31.773895 systemd[1]: Detected virtualization kvm.
Sep 12 17:10:31.773902 systemd[1]: Detected architecture arm64.
Sep 12 17:10:31.773909 systemd[1]: Running in initrd.
Sep 12 17:10:31.773917 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:10:31.773925 systemd[1]: Hostname set to .
Sep 12 17:10:31.773933 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:10:31.773942 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:10:31.773950 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:10:31.773957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:10:31.773965 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:10:31.773973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:10:31.773980 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:10:31.773989 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:10:31.773999 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:10:31.774006 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:10:31.774014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:10:31.774021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:10:31.774029 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:10:31.774037 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:10:31.774044 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:10:31.774052 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:10:31.774061 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:10:31.774068 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:10:31.774076 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:10:31.774083 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:10:31.774091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:10:31.774099 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:10:31.774106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:10:31.774113 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:10:31.774121 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:10:31.774129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:10:31.774137 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:10:31.774145 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:10:31.774152 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:10:31.774160 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:10:31.774167 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:10:31.774175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:10:31.774182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:10:31.774193 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:10:31.774200 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:10:31.774208 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:10:31.774233 systemd-journald[243]: Collecting audit messages is disabled.
Sep 12 17:10:31.774254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:10:31.774262 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:10:31.774270 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:10:31.774279 systemd-journald[243]: Journal started
Sep 12 17:10:31.774297 systemd-journald[243]: Runtime Journal (/run/log/journal/2e046a44d27340009c04068ea07c1677) is 6M, max 48.5M, 42.4M free.
Sep 12 17:10:31.763034 systemd-modules-load[245]: Inserted module 'overlay'
Sep 12 17:10:31.778558 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 12 17:10:31.780761 kernel: Bridge firewalling registered
Sep 12 17:10:31.780786 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:10:31.790507 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:10:31.791689 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:10:31.796028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:10:31.797555 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:10:31.804250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:10:31.811112 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:10:31.813265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:10:31.815448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:10:31.815929 systemd-tmpfiles[278]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:10:31.818796 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:10:31.820596 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:10:31.831755 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:10:31.845474 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:10:31.861370 systemd-resolved[288]: Positive Trust Anchors:
Sep 12 17:10:31.861451 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:10:31.861483 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:10:31.866485 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 12 17:10:31.867460 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:10:31.871396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:10:31.923405 kernel: SCSI subsystem initialized
Sep 12 17:10:31.928392 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:10:31.935417 kernel: iscsi: registered transport (tcp)
Sep 12 17:10:31.948410 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:10:31.948429 kernel: QLogic iSCSI HBA Driver
Sep 12 17:10:31.965179 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:10:31.980457 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:10:31.981918 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:10:32.031250 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:10:32.033653 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:10:32.100435 kernel: raid6: neonx8 gen() 14368 MB/s
Sep 12 17:10:32.117405 kernel: raid6: neonx4 gen() 15773 MB/s
Sep 12 17:10:32.134408 kernel: raid6: neonx2 gen() 13233 MB/s
Sep 12 17:10:32.151398 kernel: raid6: neonx1 gen() 10456 MB/s
Sep 12 17:10:32.168400 kernel: raid6: int64x8 gen() 6912 MB/s
Sep 12 17:10:32.185401 kernel: raid6: int64x4 gen() 7346 MB/s
Sep 12 17:10:32.202399 kernel: raid6: int64x2 gen() 6099 MB/s
Sep 12 17:10:32.219400 kernel: raid6: int64x1 gen() 5050 MB/s
Sep 12 17:10:32.219419 kernel: raid6: using algorithm neonx4 gen() 15773 MB/s
Sep 12 17:10:32.236407 kernel: raid6: .... xor() 12355 MB/s, rmw enabled
Sep 12 17:10:32.236426 kernel: raid6: using neon recovery algorithm
Sep 12 17:10:32.241507 kernel: xor: measuring software checksum speed
Sep 12 17:10:32.241533 kernel: 8regs : 21641 MB/sec
Sep 12 17:10:32.242690 kernel: 32regs : 21699 MB/sec
Sep 12 17:10:32.242704 kernel: arm64_neon : 27946 MB/sec
Sep 12 17:10:32.242721 kernel: xor: using function: arm64_neon (27946 MB/sec)
Sep 12 17:10:32.295412 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:10:32.301621 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:10:32.303847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:10:32.333764 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 12 17:10:32.338033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:10:32.340459 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:10:32.368134 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Sep 12 17:10:32.391452 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:10:32.393679 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:10:32.460837 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:10:32.463295 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:10:32.511304 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 17:10:32.515887 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:10:32.520746 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:10:32.520800 kernel: GPT:9289727 != 19775487
Sep 12 17:10:32.523549 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:10:32.523599 kernel: GPT:9289727 != 19775487
Sep 12 17:10:32.524679 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:10:32.524713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:10:32.525938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:10:32.526018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:10:32.534492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:10:32.536181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:10:32.557606 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:10:32.569264 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:10:32.571422 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:10:32.573349 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:10:32.590402 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:10:32.591328 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:10:32.599504 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:10:32.600497 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:10:32.602070 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:10:32.603799 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:10:32.606155 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:10:32.607938 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:10:32.625600 disk-uuid[589]: Primary Header is updated.
Sep 12 17:10:32.625600 disk-uuid[589]: Secondary Entries is updated.
Sep 12 17:10:32.625600 disk-uuid[589]: Secondary Header is updated.
Sep 12 17:10:32.630392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:10:32.631618 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:10:33.637417 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:10:33.639028 disk-uuid[593]: The operation has completed successfully.
Sep 12 17:10:33.662789 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:10:33.662884 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:10:33.689903 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:10:33.715184 sh[611]: Success
Sep 12 17:10:33.729960 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:10:33.730015 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:10:33.730025 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 17:10:33.737461 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 12 17:10:33.766254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:10:33.768909 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:10:33.792949 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:10:33.800399 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (623)
Sep 12 17:10:33.804392 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194
Sep 12 17:10:33.804420 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:10:33.808404 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:10:33.808425 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 17:10:33.809847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:10:33.810764 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:10:33.812002 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:10:33.812782 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:10:33.815640 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:10:33.837288 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656)
Sep 12 17:10:33.837342 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:10:33.838391 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:10:33.840399 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 17:10:33.840442 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 17:10:33.845421 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:10:33.846284 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:10:33.848501 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:10:33.926348 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:10:33.930572 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:10:33.972005 ignition[703]: Ignition 2.21.0
Sep 12 17:10:33.972812 ignition[703]: Stage: fetch-offline
Sep 12 17:10:33.972861 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:10:33.972869 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:10:33.973053 ignition[703]: parsed url from cmdline: ""
Sep 12 17:10:33.974979 systemd-networkd[804]: lo: Link UP
Sep 12 17:10:33.973056 ignition[703]: no config URL provided
Sep 12 17:10:33.974982 systemd-networkd[804]: lo: Gained carrier
Sep 12 17:10:33.973060 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:10:33.975670 systemd-networkd[804]: Enumeration completed
Sep 12 17:10:33.973067 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:10:33.975799 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:10:33.973163 ignition[703]: op(1): [started] loading QEMU firmware config module
Sep 12 17:10:33.976057 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:10:33.973168 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:10:33.976062 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:10:33.985717 ignition[703]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:10:33.977162 systemd-networkd[804]: eth0: Link UP
Sep 12 17:10:33.977490 systemd-networkd[804]: eth0: Gained carrier
Sep 12 17:10:33.977529 systemd[1]: Reached target network.target - Network.
Sep 12 17:10:33.977545 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:10:34.001437 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:10:34.040789 ignition[703]: parsing config with SHA512: cd6908ccba8d839f0fc7a7fe4ebcee0c5c83704dba8def0fd1e83d036047cd4fed6b2b5b026590b7aa44690986099a97f05817fa55e95b232964eb01708fec7e
Sep 12 17:10:34.045196 unknown[703]: fetched base config from "system"
Sep 12 17:10:34.045208 unknown[703]: fetched user config from "qemu"
Sep 12 17:10:34.045646 ignition[703]: fetch-offline: fetch-offline passed
Sep 12 17:10:34.045730 ignition[703]: Ignition finished successfully
Sep 12 17:10:34.048150 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:10:34.049571 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:10:34.050417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:10:34.077580 ignition[812]: Ignition 2.21.0
Sep 12 17:10:34.077599 ignition[812]: Stage: kargs
Sep 12 17:10:34.077750 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:10:34.077761 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:10:34.080189 ignition[812]: kargs: kargs passed
Sep 12 17:10:34.080259 ignition[812]: Ignition finished successfully
Sep 12 17:10:34.082322 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:10:34.086484 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:10:34.110524 ignition[820]: Ignition 2.21.0
Sep 12 17:10:34.110539 ignition[820]: Stage: disks
Sep 12 17:10:34.110711 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:10:34.110720 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:10:34.111538 ignition[820]: disks: disks passed
Sep 12 17:10:34.113319 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:10:34.111590 ignition[820]: Ignition finished successfully
Sep 12 17:10:34.115659 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:10:34.116859 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:10:34.118274 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:10:34.119600 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:10:34.121038 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:10:34.123332 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:10:34.143881 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 17:10:34.214349 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:10:34.218123 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:10:34.297407 kernel: EXT4-fs (vda9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none.
Sep 12 17:10:34.298140 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:10:34.299431 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:10:34.302449 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:10:34.304599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:10:34.305648 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:10:34.305724 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:10:34.305750 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:10:34.319255 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:10:34.321415 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:10:34.326401 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838)
Sep 12 17:10:34.328460 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:10:34.328489 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:10:34.332418 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 17:10:34.332472 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 17:10:34.334065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:10:34.366893 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:10:34.371195 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:10:34.375633 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:10:34.379148 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:10:34.457438 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:10:34.459240 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:10:34.461433 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:10:34.489400 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:10:34.509574 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:10:34.521819 ignition[951]: INFO : Ignition 2.21.0
Sep 12 17:10:34.521819 ignition[951]: INFO : Stage: mount
Sep 12 17:10:34.524803 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:10:34.524803 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:10:34.524803 ignition[951]: INFO : mount: mount passed
Sep 12 17:10:34.524803 ignition[951]: INFO : Ignition finished successfully
Sep 12 17:10:34.526342 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:10:34.528727 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:10:34.800682 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:10:34.802424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:10:34.820540 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (964)
Sep 12 17:10:34.820585 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:10:34.822404 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:10:34.824796 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 17:10:34.824817 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 17:10:34.826240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:10:34.856768 ignition[981]: INFO : Ignition 2.21.0
Sep 12 17:10:34.856768 ignition[981]: INFO : Stage: files
Sep 12 17:10:34.858141 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:10:34.858141 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:10:34.858141 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:10:34.861406 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:10:34.861406 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:10:34.861406 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:10:34.865275 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:10:34.865275 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:10:34.865275 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:10:34.865275 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 17:10:34.861887 unknown[981]: wrote ssh authorized keys file for user: core
Sep 12 17:10:34.935925 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:10:35.219785 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:10:35.219785 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:10:35.222859 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:10:35.222859 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:10:35.222859 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:10:35.222859 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:10:35.222859 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:10:35.222859 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:10:35.222859 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:10:35.232587 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:10:35.232587 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:10:35.232587 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:10:35.232587 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:10:35.232587 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:10:35.232587 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 17:10:35.618934 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:10:35.817151 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:10:35.817151 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:10:35.820518 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:10:35.820518 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:10:35.820518 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:10:35.820518 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 17:10:35.820518 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:10:35.820518 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:10:35.820518 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 17:10:35.820518 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:10:35.836281 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:10:35.840367 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:10:35.842847 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:10:35.842847 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:10:35.842847 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:10:35.842847 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:10:35.842847 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:10:35.842847 ignition[981]: INFO : files: files passed
Sep 12 17:10:35.842847 ignition[981]: INFO : Ignition finished successfully
Sep 12 17:10:35.843606 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:10:35.846110 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:10:35.849550 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:10:35.860640 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:10:35.860753 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:10:35.863046 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:10:35.865356 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:10:35.865356 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:10:35.868191 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:10:35.867626 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:10:35.870695 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:10:35.873324 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:10:35.894510 systemd-networkd[804]: eth0: Gained IPv6LL
Sep 12 17:10:35.909145 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:10:35.909274 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:10:35.911207 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:10:35.913246 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:10:35.915108 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:10:35.916218 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:10:35.943430 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:10:35.946532 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:10:35.970169 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:10:35.971246 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:10:35.972953 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:10:35.974290 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:10:35.974434 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:10:35.976753 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:10:35.978254 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:10:35.979775 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:10:35.981271 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:10:35.982789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:10:35.984207 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:10:35.985886 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:10:35.987279 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:10:35.988928 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:10:35.990362 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:10:35.991998 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:10:35.993138 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:10:35.993268 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:10:35.995105 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:10:35.996889 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:10:35.998310 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:10:36.002486 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:10:36.003473 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:10:36.003598 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:10:36.005896 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:10:36.006055 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:10:36.007562 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:10:36.008890 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:10:36.009042 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:10:36.010438 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:10:36.011680 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:10:36.013107 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:10:36.013196 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:10:36.014887 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:10:36.014962 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:10:36.016264 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:10:36.016407 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:10:36.017948 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:10:36.018053 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:10:36.020055 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:10:36.021025 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:10:36.021151 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:10:36.023478 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:10:36.024938 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:10:36.025057 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:10:36.026423 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:10:36.026521 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:10:36.031285 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:10:36.033542 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:10:36.042106 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:10:36.049946 ignition[1037]: INFO : Ignition 2.21.0
Sep 12 17:10:36.049946 ignition[1037]: INFO : Stage: umount
Sep 12 17:10:36.052928 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:10:36.052928 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:10:36.052928 ignition[1037]: INFO : umount: umount passed
Sep 12 17:10:36.052928 ignition[1037]: INFO : Ignition finished successfully
Sep 12 17:10:36.050335 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:10:36.050739 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:10:36.053209 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:10:36.054443 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:10:36.057289 systemd[1]: Stopped target network.target - Network.
Sep 12 17:10:36.058308 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:10:36.058364 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:10:36.059769 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:10:36.059808 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:10:36.061098 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:10:36.061141 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:10:36.062396 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:10:36.062433 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:10:36.063867 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:10:36.063909 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:10:36.065279 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:10:36.066479 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:10:36.074203 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:10:36.074317 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:10:36.077485 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:10:36.077796 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:10:36.077836 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:10:36.080464 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:10:36.081600 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:10:36.081709 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:10:36.085111 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:10:36.085305 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 17:10:36.086664 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:10:36.086705 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:10:36.089031 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:10:36.090303 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:10:36.090359 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:10:36.092052 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:10:36.092091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:10:36.093682 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:10:36.093726 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:10:36.095162 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 12 17:10:36.099268 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:10:36.113766 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:10:36.121562 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:10:36.124132 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:10:36.124217 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:10:36.125250 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:10:36.125280 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:10:36.126807 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:10:36.126854 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:10:36.129162 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:10:36.129204 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:10:36.131840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:10:36.131890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:10:36.134884 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:10:36.136234 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:10:36.136292 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:10:36.139334 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:10:36.139387 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:10:36.142244 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:10:36.142285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 17:10:36.145979 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:10:36.159535 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:10:36.164842 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:10:36.164945 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:10:36.166766 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:10:36.168894 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:10:36.205916 systemd[1]: Switching root. Sep 12 17:10:36.238557 systemd-journald[243]: Journal stopped Sep 12 17:10:37.015702 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Sep 12 17:10:37.015758 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:10:37.015772 kernel: SELinux: policy capability open_perms=1 Sep 12 17:10:37.015782 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:10:37.015791 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:10:37.015803 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:10:37.015817 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:10:37.015829 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:10:37.015839 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:10:37.015849 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:10:37.015858 kernel: audit: type=1403 audit(1757697036.418:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:10:37.015872 systemd[1]: Successfully loaded SELinux policy in 59.324ms. Sep 12 17:10:37.015887 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.383ms. 
Sep 12 17:10:37.015901 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:10:37.015912 systemd[1]: Detected virtualization kvm. Sep 12 17:10:37.015922 systemd[1]: Detected architecture arm64. Sep 12 17:10:37.015932 systemd[1]: Detected first boot. Sep 12 17:10:37.015942 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:10:37.015954 zram_generator::config[1082]: No configuration found. Sep 12 17:10:37.015966 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:10:37.015975 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:10:37.015985 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:10:37.015996 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:10:37.016008 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:10:37.016019 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:10:37.016029 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:10:37.016041 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:10:37.016051 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:10:37.016062 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:10:37.016072 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:10:37.016082 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:10:37.016093 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Sep 12 17:10:37.016103 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:10:37.016113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:10:37.016123 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:10:37.016137 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:10:37.016147 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:10:37.016157 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:10:37.016167 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:10:37.016180 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:10:37.016191 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:10:37.016201 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:10:37.016210 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:10:37.016221 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:10:37.016231 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:10:37.016244 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:10:37.016253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:10:37.016263 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:10:37.016273 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:10:37.016283 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:10:37.016293 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Sep 12 17:10:37.016305 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:10:37.016318 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:10:37.016328 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:10:37.016338 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:10:37.016349 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:10:37.016359 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:10:37.016369 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:10:37.016431 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:10:37.016446 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:10:37.016458 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:10:37.016470 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:10:37.016480 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:10:37.016490 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:10:37.016501 systemd[1]: Reached target machines.target - Containers. Sep 12 17:10:37.016511 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:10:37.016522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:10:37.016532 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:10:37.016542 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 12 17:10:37.016554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:10:37.016564 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:10:37.016574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:10:37.016584 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:10:37.016593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:10:37.016604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:10:37.016615 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:10:37.016625 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:10:37.016634 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:10:37.016646 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:10:37.016656 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:10:37.016675 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:10:37.016687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:10:37.016696 kernel: loop: module loaded Sep 12 17:10:37.016706 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:10:37.016716 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:10:37.016725 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:10:37.016738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Sep 12 17:10:37.016747 kernel: fuse: init (API version 7.41) Sep 12 17:10:37.016757 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:10:37.016766 systemd[1]: Stopped verity-setup.service. Sep 12 17:10:37.016776 kernel: ACPI: bus type drm_connector registered Sep 12 17:10:37.016786 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:10:37.016796 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:10:37.016806 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:10:37.016816 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:10:37.016849 systemd-journald[1150]: Collecting audit messages is disabled. Sep 12 17:10:37.016871 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:10:37.016882 systemd-journald[1150]: Journal started Sep 12 17:10:37.016904 systemd-journald[1150]: Runtime Journal (/run/log/journal/2e046a44d27340009c04068ea07c1677) is 6M, max 48.5M, 42.4M free. Sep 12 17:10:36.808300 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:10:36.832492 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:10:36.832893 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:10:37.018965 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:10:37.019014 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:10:37.023401 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:10:37.024706 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:10:37.026794 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:10:37.026995 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:10:37.028288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 17:10:37.028484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:10:37.031053 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:10:37.031240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:10:37.032677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:10:37.032840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:10:37.034294 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:10:37.034469 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:10:37.035562 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:10:37.035740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:10:37.036997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:10:37.038277 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:10:37.039731 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:10:37.041117 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:10:37.054563 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:10:37.056734 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:10:37.060525 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:10:37.061392 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:10:37.061431 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:10:37.063132 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Sep 12 17:10:37.070590 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:10:37.071732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:10:37.072961 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:10:37.075045 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:10:37.076150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:10:37.079532 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:10:37.080701 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:10:37.081871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:10:37.084613 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:10:37.084910 systemd-journald[1150]: Time spent on flushing to /var/log/journal/2e046a44d27340009c04068ea07c1677 is 19.066ms for 881 entries. Sep 12 17:10:37.084910 systemd-journald[1150]: System Journal (/var/log/journal/2e046a44d27340009c04068ea07c1677) is 8M, max 195.6M, 187.6M free. Sep 12 17:10:37.159125 systemd-journald[1150]: Received client request to flush runtime journal. Sep 12 17:10:37.159160 kernel: loop0: detected capacity change from 0 to 203944 Sep 12 17:10:37.159172 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:10:37.159182 kernel: loop1: detected capacity change from 0 to 119320 Sep 12 17:10:37.159192 kernel: loop2: detected capacity change from 0 to 100608 Sep 12 17:10:37.087185 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 12 17:10:37.091415 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:10:37.092575 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:10:37.093657 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:10:37.113479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:10:37.145715 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:10:37.147206 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:10:37.149713 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:10:37.151804 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:10:37.154728 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:10:37.161606 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:10:37.182328 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Sep 12 17:10:37.182348 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Sep 12 17:10:37.184081 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:10:37.187280 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:10:37.190486 kernel: loop3: detected capacity change from 0 to 203944 Sep 12 17:10:37.199150 kernel: loop4: detected capacity change from 0 to 119320 Sep 12 17:10:37.208411 kernel: loop5: detected capacity change from 0 to 100608 Sep 12 17:10:37.212497 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:10:37.212887 (sd-merge)[1220]: Merged extensions into '/usr'. Sep 12 17:10:37.218016 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... 
Sep 12 17:10:37.218034 systemd[1]: Reloading... Sep 12 17:10:37.275464 zram_generator::config[1248]: No configuration found. Sep 12 17:10:37.399603 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:10:37.421409 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:10:37.421648 systemd[1]: Reloading finished in 203 ms. Sep 12 17:10:37.450063 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:10:37.451273 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:10:37.466725 systemd[1]: Starting ensure-sysext.service... Sep 12 17:10:37.468422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:10:37.477601 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:10:37.477615 systemd[1]: Reloading... Sep 12 17:10:37.484499 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:10:37.484847 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:10:37.485161 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:10:37.485468 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:10:37.486188 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:10:37.486505 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Sep 12 17:10:37.486617 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Sep 12 17:10:37.489429 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 12 17:10:37.489531 systemd-tmpfiles[1283]: Skipping /boot Sep 12 17:10:37.495274 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:10:37.495394 systemd-tmpfiles[1283]: Skipping /boot Sep 12 17:10:37.523406 zram_generator::config[1310]: No configuration found. Sep 12 17:10:37.651291 systemd[1]: Reloading finished in 173 ms. Sep 12 17:10:37.677026 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:10:37.682272 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:10:37.690279 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:10:37.692608 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:10:37.694468 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:10:37.696731 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:10:37.699523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:10:37.701568 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:10:37.708405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:10:37.709816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:10:37.714573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:10:37.723291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:10:37.724548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 17:10:37.724674 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:10:37.725763 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:10:37.727619 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:10:37.727799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:10:37.729624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:10:37.729774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:10:37.731371 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:10:37.731525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:10:37.734737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:10:37.742362 augenrules[1379]: No rules Sep 12 17:10:37.743565 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:10:37.745418 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:10:37.746167 systemd-udevd[1351]: Using default interface naming scheme 'v255'. Sep 12 17:10:37.746776 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:10:37.755428 systemd[1]: Finished ensure-sysext.service. Sep 12 17:10:37.758089 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:10:37.759000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:10:37.759978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:10:37.763538 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 12 17:10:37.770539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:10:37.772353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:10:37.773226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:10:37.773273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:10:37.774931 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:10:37.778227 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:10:37.784028 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:10:37.784899 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:10:37.785207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:10:37.787176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:10:37.795190 augenrules[1387]: /sbin/augenrules: No change Sep 12 17:10:37.804903 augenrules[1438]: No rules Sep 12 17:10:37.815040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:10:37.818510 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:10:37.818699 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:10:37.819948 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:10:37.820095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:10:37.827341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 12 17:10:37.827543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:10:37.828741 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:10:37.830571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:10:37.832149 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:10:37.845112 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 17:10:37.848234 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:10:37.849244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:10:37.849305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:10:37.889709 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:10:37.891359 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:10:37.893724 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:10:37.916849 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:10:37.961698 systemd-resolved[1349]: Positive Trust Anchors: Sep 12 17:10:37.961728 systemd-resolved[1349]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:10:37.961760 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:10:37.970140 systemd-resolved[1349]: Defaulting to hostname 'linux'. Sep 12 17:10:37.973932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:10:37.975247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:10:37.981288 systemd-networkd[1449]: lo: Link UP Sep 12 17:10:37.981592 systemd-networkd[1449]: lo: Gained carrier Sep 12 17:10:37.982511 systemd-networkd[1449]: Enumeration completed Sep 12 17:10:37.982720 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:10:37.983154 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:10:37.983222 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:10:37.983717 systemd[1]: Reached target network.target - Network. Sep 12 17:10:37.983965 systemd-networkd[1449]: eth0: Link UP Sep 12 17:10:37.984159 systemd-networkd[1449]: eth0: Gained carrier Sep 12 17:10:37.984234 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:10:37.986013 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:10:37.988501 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:10:38.007479 systemd-networkd[1449]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:10:38.014815 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:10:38.016269 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:10:38.018770 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:10:37.611019 systemd-resolved[1349]: Clock change detected. Flushing caches. Sep 12 17:10:37.615094 systemd-journald[1150]: Time jumped backwards, rotating. Sep 12 17:10:37.611244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:10:37.611262 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:10:37.611314 systemd-timesyncd[1409]: Initial clock synchronization to Fri 2025-09-12 17:10:37.610973 UTC. Sep 12 17:10:37.656510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:10:37.657899 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:10:37.658985 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:10:37.660150 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:10:37.661336 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:10:37.662307 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:10:37.663384 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 12 17:10:37.664563 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:10:37.664662 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:10:37.665472 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:10:37.667962 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:10:37.670145 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:10:37.672675 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:10:37.673823 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:10:37.674848 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:10:37.677943 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:10:37.679352 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:10:37.680782 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:10:37.681720 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:10:37.682490 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:10:37.683191 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:10:37.683223 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:10:37.684079 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:10:37.685850 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:10:37.687472 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:10:37.689807 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 12 17:10:37.692197 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:10:37.692931 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:10:37.694108 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:10:37.697692 jq[1498]: false Sep 12 17:10:37.697210 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:10:37.698862 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:10:37.702482 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:10:37.704796 extend-filesystems[1499]: Found /dev/vda6 Sep 12 17:10:37.705546 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:10:37.707820 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:10:37.708505 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:10:37.708971 extend-filesystems[1499]: Found /dev/vda9 Sep 12 17:10:37.711209 extend-filesystems[1499]: Checking size of /dev/vda9 Sep 12 17:10:37.711280 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:10:37.713915 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:10:37.719161 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:10:37.721014 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:10:37.721320 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 17:10:37.723961 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:10:37.726171 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:10:37.727380 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:10:37.727564 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:10:37.728792 jq[1517]: true Sep 12 17:10:37.731375 extend-filesystems[1499]: Resized partition /dev/vda9 Sep 12 17:10:37.739504 update_engine[1511]: I20250912 17:10:37.739017 1511 main.cc:92] Flatcar Update Engine starting Sep 12 17:10:37.740080 tar[1523]: linux-arm64/helm Sep 12 17:10:37.741838 extend-filesystems[1533]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:10:37.743535 (ntainerd)[1529]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:10:37.749537 jq[1527]: true Sep 12 17:10:37.764500 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:10:37.776669 dbus-daemon[1496]: [system] SELinux support is enabled Sep 12 17:10:37.777980 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:10:37.782858 update_engine[1511]: I20250912 17:10:37.782776 1511 update_check_scheduler.cc:74] Next update check in 2m21s Sep 12 17:10:37.784668 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:10:37.784702 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:10:37.786379 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 12 17:10:37.786411 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:10:37.788083 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:10:37.792347 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:10:37.805132 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:10:37.817970 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:10:37.817970 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:10:37.817970 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:10:37.835349 extend-filesystems[1499]: Resized filesystem in /dev/vda9 Sep 12 17:10:37.836017 bash[1555]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:10:37.818028 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:10:37.821284 systemd-logind[1509]: New seat seat0. Sep 12 17:10:37.823357 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:10:37.823596 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:10:37.826226 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:10:37.829296 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:10:37.832648 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 12 17:10:37.862038 locksmithd[1556]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:10:37.934916 containerd[1529]: time="2025-09-12T17:10:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:10:37.938137 containerd[1529]: time="2025-09-12T17:10:37.937740391Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:10:37.947610 containerd[1529]: time="2025-09-12T17:10:37.947560951Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.44µs" Sep 12 17:10:37.947610 containerd[1529]: time="2025-09-12T17:10:37.947604031Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:10:37.947689 containerd[1529]: time="2025-09-12T17:10:37.947621751Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:10:37.947895 containerd[1529]: time="2025-09-12T17:10:37.947859951Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:10:37.947895 containerd[1529]: time="2025-09-12T17:10:37.947885911Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:10:37.947931 containerd[1529]: time="2025-09-12T17:10:37.947917151Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948046 containerd[1529]: time="2025-09-12T17:10:37.948025311Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948067 containerd[1529]: time="2025-09-12T17:10:37.948045471Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948402 containerd[1529]: time="2025-09-12T17:10:37.948359791Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948456 containerd[1529]: time="2025-09-12T17:10:37.948385271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948483 containerd[1529]: time="2025-09-12T17:10:37.948469951Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948508 containerd[1529]: time="2025-09-12T17:10:37.948483831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948581 containerd[1529]: time="2025-09-12T17:10:37.948564991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948882 containerd[1529]: time="2025-09-12T17:10:37.948860631Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948962 containerd[1529]: time="2025-09-12T17:10:37.948901311Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:10:37.948984 containerd[1529]: time="2025-09-12T17:10:37.948963551Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:10:37.949020 containerd[1529]: time="2025-09-12T17:10:37.949004071Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:10:37.949479 containerd[1529]: time="2025-09-12T17:10:37.949449631Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:10:37.949614 containerd[1529]: time="2025-09-12T17:10:37.949593351Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:10:37.953412 containerd[1529]: time="2025-09-12T17:10:37.953369831Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:10:37.953471 containerd[1529]: time="2025-09-12T17:10:37.953444231Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:10:37.953471 containerd[1529]: time="2025-09-12T17:10:37.953466911Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:10:37.953502 containerd[1529]: time="2025-09-12T17:10:37.953479831Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:10:37.953502 containerd[1529]: time="2025-09-12T17:10:37.953491511Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:10:37.953531 containerd[1529]: time="2025-09-12T17:10:37.953504511Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:10:37.953531 containerd[1529]: time="2025-09-12T17:10:37.953523431Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:10:37.953573 containerd[1529]: time="2025-09-12T17:10:37.953535471Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:10:37.953573 containerd[1529]: time="2025-09-12T17:10:37.953548351Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:10:37.953573 containerd[1529]: time="2025-09-12T17:10:37.953558791Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:10:37.953573 containerd[1529]: time="2025-09-12T17:10:37.953568351Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:10:37.953628 containerd[1529]: time="2025-09-12T17:10:37.953579911Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:10:37.953714 containerd[1529]: time="2025-09-12T17:10:37.953691031Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:10:37.953747 containerd[1529]: time="2025-09-12T17:10:37.953726151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:10:37.953747 containerd[1529]: time="2025-09-12T17:10:37.953742071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:10:37.953781 containerd[1529]: time="2025-09-12T17:10:37.953752551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:10:37.953781 containerd[1529]: time="2025-09-12T17:10:37.953762551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:10:37.953781 containerd[1529]: time="2025-09-12T17:10:37.953772311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:10:37.953826 containerd[1529]: time="2025-09-12T17:10:37.953782831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:10:37.953826 containerd[1529]: time="2025-09-12T17:10:37.953793271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 
17:10:37.953826 containerd[1529]: time="2025-09-12T17:10:37.953803391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:10:37.953826 containerd[1529]: time="2025-09-12T17:10:37.953813631Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:10:37.953826 containerd[1529]: time="2025-09-12T17:10:37.953823471Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:10:37.954025 containerd[1529]: time="2025-09-12T17:10:37.954009511Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:10:37.954051 containerd[1529]: time="2025-09-12T17:10:37.954029911Z" level=info msg="Start snapshots syncer" Sep 12 17:10:37.954051 containerd[1529]: time="2025-09-12T17:10:37.954048071Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:10:37.954757 containerd[1529]: time="2025-09-12T17:10:37.954652391Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:10:37.954852 containerd[1529]: time="2025-09-12T17:10:37.954760791Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:10:37.954911 containerd[1529]: time="2025-09-12T17:10:37.954888871Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:10:37.955059 containerd[1529]: time="2025-09-12T17:10:37.955039711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:10:37.955155 containerd[1529]: time="2025-09-12T17:10:37.955135311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:10:37.955182 containerd[1529]: time="2025-09-12T17:10:37.955169991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:10:37.955249 containerd[1529]: time="2025-09-12T17:10:37.955232751Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:10:37.955266 containerd[1529]: time="2025-09-12T17:10:37.955256351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:10:37.955283 containerd[1529]: time="2025-09-12T17:10:37.955269671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:10:37.955283 containerd[1529]: time="2025-09-12T17:10:37.955280631Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:10:37.955320 containerd[1529]: time="2025-09-12T17:10:37.955304071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:10:37.955373 containerd[1529]: time="2025-09-12T17:10:37.955315951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:10:37.955399 containerd[1529]: time="2025-09-12T17:10:37.955378591Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:10:37.955496 containerd[1529]: time="2025-09-12T17:10:37.955479711Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:10:37.955514 containerd[1529]: time="2025-09-12T17:10:37.955503351Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:10:37.955569 containerd[1529]: time="2025-09-12T17:10:37.955514031Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:10:37.955569 containerd[1529]: time="2025-09-12T17:10:37.955523871Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:10:37.955569 containerd[1529]: time="2025-09-12T17:10:37.955531431Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:10:37.955569 containerd[1529]: time="2025-09-12T17:10:37.955564031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:10:37.955629 containerd[1529]: time="2025-09-12T17:10:37.955577631Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:10:37.955758 containerd[1529]: time="2025-09-12T17:10:37.955695431Z" level=info msg="runtime interface created" Sep 12 17:10:37.955758 containerd[1529]: time="2025-09-12T17:10:37.955755231Z" level=info msg="created NRI interface" Sep 12 17:10:37.955794 containerd[1529]: time="2025-09-12T17:10:37.955766071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:10:37.955794 containerd[1529]: time="2025-09-12T17:10:37.955778711Z" level=info msg="Connect containerd service" Sep 12 17:10:37.955824 containerd[1529]: time="2025-09-12T17:10:37.955806911Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:10:37.957285 
containerd[1529]: time="2025-09-12T17:10:37.957200031Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:10:38.004780 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:10:38.026163 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:10:38.029372 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:10:38.030343 containerd[1529]: time="2025-09-12T17:10:38.030282351Z" level=info msg="Start subscribing containerd event" Sep 12 17:10:38.030415 containerd[1529]: time="2025-09-12T17:10:38.030358111Z" level=info msg="Start recovering state" Sep 12 17:10:38.030465 containerd[1529]: time="2025-09-12T17:10:38.030446831Z" level=info msg="Start event monitor" Sep 12 17:10:38.030488 containerd[1529]: time="2025-09-12T17:10:38.030473391Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:10:38.030488 containerd[1529]: time="2025-09-12T17:10:38.030482831Z" level=info msg="Start streaming server" Sep 12 17:10:38.030541 containerd[1529]: time="2025-09-12T17:10:38.030492031Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:10:38.030541 containerd[1529]: time="2025-09-12T17:10:38.030500111Z" level=info msg="runtime interface starting up..." Sep 12 17:10:38.030541 containerd[1529]: time="2025-09-12T17:10:38.030505831Z" level=info msg="starting plugins..." Sep 12 17:10:38.030541 containerd[1529]: time="2025-09-12T17:10:38.030521431Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:10:38.032369 containerd[1529]: time="2025-09-12T17:10:38.032333911Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:10:38.032437 containerd[1529]: time="2025-09-12T17:10:38.032426991Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 17:10:38.033172 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:10:38.034190 containerd[1529]: time="2025-09-12T17:10:38.033060711Z" level=info msg="containerd successfully booted in 0.098486s" Sep 12 17:10:38.045457 tar[1523]: linux-arm64/LICENSE Sep 12 17:10:38.045545 tar[1523]: linux-arm64/README.md Sep 12 17:10:38.047378 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:10:38.047583 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:10:38.050498 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:10:38.061328 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:10:38.064767 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:10:38.067765 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:10:38.069753 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:10:38.070840 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:10:39.323244 systemd-networkd[1449]: eth0: Gained IPv6LL Sep 12 17:10:39.325706 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:10:39.327187 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:10:39.329213 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:10:39.331401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:39.333097 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:10:39.356558 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:10:39.356812 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:10:39.360192 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 12 17:10:39.362829 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:10:39.882001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:10:39.883522 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:10:39.884943 systemd[1]: Startup finished in 2.088s (kernel) + 4.806s (initrd) + 3.937s (userspace) = 10.832s.
Sep 12 17:10:39.885711 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:10:40.263804 kubelet[1630]: E0912 17:10:40.263706 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:10:40.266286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:10:40.266437 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:10:40.267524 systemd[1]: kubelet.service: Consumed 771ms CPU time, 257M memory peak.
Sep 12 17:10:44.023833 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:10:44.024987 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:39148.service - OpenSSH per-connection server daemon (10.0.0.1:39148).
Sep 12 17:10:44.086783 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 39148 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:10:44.088833 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:10:44.095203 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:10:44.096170 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:10:44.103345 systemd-logind[1509]: New session 1 of user core.
Sep 12 17:10:44.121273 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:10:44.123921 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:10:44.147394 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:10:44.149701 systemd-logind[1509]: New session c1 of user core.
Sep 12 17:10:44.264389 systemd[1648]: Queued start job for default target default.target.
Sep 12 17:10:44.280230 systemd[1648]: Created slice app.slice - User Application Slice.
Sep 12 17:10:44.280260 systemd[1648]: Reached target paths.target - Paths.
Sep 12 17:10:44.280298 systemd[1648]: Reached target timers.target - Timers.
Sep 12 17:10:44.281622 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:10:44.291777 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:10:44.291850 systemd[1648]: Reached target sockets.target - Sockets.
Sep 12 17:10:44.291890 systemd[1648]: Reached target basic.target - Basic System.
Sep 12 17:10:44.291918 systemd[1648]: Reached target default.target - Main User Target.
Sep 12 17:10:44.291943 systemd[1648]: Startup finished in 136ms.
Sep 12 17:10:44.292112 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:10:44.293668 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:10:44.349973 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:39156.service - OpenSSH per-connection server daemon (10.0.0.1:39156).
Sep 12 17:10:44.415271 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 39156 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:10:44.416717 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:10:44.421167 systemd-logind[1509]: New session 2 of user core.
Sep 12 17:10:44.431357 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:10:44.483214 sshd[1662]: Connection closed by 10.0.0.1 port 39156
Sep 12 17:10:44.483490 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Sep 12 17:10:44.493407 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:39156.service: Deactivated successfully.
Sep 12 17:10:44.495063 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 17:10:44.496698 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit.
Sep 12 17:10:44.499838 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:39172.service - OpenSSH per-connection server daemon (10.0.0.1:39172).
Sep 12 17:10:44.500578 systemd-logind[1509]: Removed session 2.
Sep 12 17:10:44.567804 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 39172 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:10:44.569157 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:10:44.574188 systemd-logind[1509]: New session 3 of user core.
Sep 12 17:10:44.588362 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:10:44.636701 sshd[1671]: Connection closed by 10.0.0.1 port 39172
Sep 12 17:10:44.636621 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Sep 12 17:10:44.647418 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:39172.service: Deactivated successfully.
Sep 12 17:10:44.649640 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 17:10:44.651205 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit.
Sep 12 17:10:44.654100 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:39184.service - OpenSSH per-connection server daemon (10.0.0.1:39184).
Sep 12 17:10:44.655194 systemd-logind[1509]: Removed session 3.
Sep 12 17:10:44.712509 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 39184 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:10:44.716401 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:10:44.721184 systemd-logind[1509]: New session 4 of user core.
Sep 12 17:10:44.737348 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 17:10:44.793095 sshd[1680]: Connection closed by 10.0.0.1 port 39184
Sep 12 17:10:44.793552 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Sep 12 17:10:44.802067 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:39184.service: Deactivated successfully.
Sep 12 17:10:44.804540 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 17:10:44.805213 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit.
Sep 12 17:10:44.807608 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:39200.service - OpenSSH per-connection server daemon (10.0.0.1:39200).
Sep 12 17:10:44.811223 systemd-logind[1509]: Removed session 4.
Sep 12 17:10:44.870201 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 39200 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:10:44.871814 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:10:44.876828 systemd-logind[1509]: New session 5 of user core.
Sep 12 17:10:44.892331 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 17:10:44.951440 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:10:44.951756 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:10:44.981102 sudo[1690]: pam_unix(sudo:session): session closed for user root
Sep 12 17:10:44.983045 sshd[1689]: Connection closed by 10.0.0.1 port 39200
Sep 12 17:10:44.986331 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Sep 12 17:10:44.999616 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:39200.service: Deactivated successfully.
Sep 12 17:10:45.002597 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 17:10:45.003387 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit.
Sep 12 17:10:45.005827 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:39212.service - OpenSSH per-connection server daemon (10.0.0.1:39212).
Sep 12 17:10:45.006568 systemd-logind[1509]: Removed session 5.
Sep 12 17:10:45.076204 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 39212 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:10:45.077917 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:10:45.082642 systemd-logind[1509]: New session 6 of user core.
Sep 12 17:10:45.095328 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:10:45.148492 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:10:45.148764 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:10:45.358457 sudo[1702]: pam_unix(sudo:session): session closed for user root
Sep 12 17:10:45.363615 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 17:10:45.364103 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:10:45.374272 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:10:45.429458 augenrules[1724]: No rules
Sep 12 17:10:45.429196 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:10:45.431209 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:10:45.434575 sudo[1701]: pam_unix(sudo:session): session closed for user root
Sep 12 17:10:45.436613 sshd[1700]: Connection closed by 10.0.0.1 port 39212
Sep 12 17:10:45.436481 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Sep 12 17:10:45.450290 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:39212.service: Deactivated successfully.
Sep 12 17:10:45.452341 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:10:45.453900 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:10:45.457847 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:39214.service - OpenSSH per-connection server daemon (10.0.0.1:39214).
Sep 12 17:10:45.459421 systemd-logind[1509]: Removed session 6.
Sep 12 17:10:45.528574 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 39214 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:10:45.530608 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:10:45.536043 systemd-logind[1509]: New session 7 of user core.
Sep 12 17:10:45.544353 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:10:45.596206 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:10:45.596478 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:10:45.929232 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:10:45.948532 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:10:46.151516 dockerd[1757]: time="2025-09-12T17:10:46.151445151Z" level=info msg="Starting up"
Sep 12 17:10:46.152371 dockerd[1757]: time="2025-09-12T17:10:46.152346431Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 17:10:46.163901 dockerd[1757]: time="2025-09-12T17:10:46.163859591Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 17:10:46.196188 dockerd[1757]: time="2025-09-12T17:10:46.195878471Z" level=info msg="Loading containers: start."
Sep 12 17:10:46.204189 kernel: Initializing XFRM netlink socket
Sep 12 17:10:46.437093 systemd-networkd[1449]: docker0: Link UP
Sep 12 17:10:46.441074 dockerd[1757]: time="2025-09-12T17:10:46.441034191Z" level=info msg="Loading containers: done."
Sep 12 17:10:46.457356 dockerd[1757]: time="2025-09-12T17:10:46.457175111Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:10:46.457356 dockerd[1757]: time="2025-09-12T17:10:46.457330151Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 17:10:46.457502 dockerd[1757]: time="2025-09-12T17:10:46.457423071Z" level=info msg="Initializing buildkit"
Sep 12 17:10:46.494926 dockerd[1757]: time="2025-09-12T17:10:46.494753271Z" level=info msg="Completed buildkit initialization"
Sep 12 17:10:46.500328 dockerd[1757]: time="2025-09-12T17:10:46.500253431Z" level=info msg="Daemon has completed initialization"
Sep 12 17:10:46.500449 dockerd[1757]: time="2025-09-12T17:10:46.500327591Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:10:46.500538 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:10:47.117216 containerd[1529]: time="2025-09-12T17:10:47.116974151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 12 17:10:47.747902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795386920.mount: Deactivated successfully.
Sep 12 17:10:48.986181 containerd[1529]: time="2025-09-12T17:10:48.986010951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:49.036913 containerd[1529]: time="2025-09-12T17:10:49.036850791Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327"
Sep 12 17:10:49.051780 containerd[1529]: time="2025-09-12T17:10:49.051716031Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:49.068675 containerd[1529]: time="2025-09-12T17:10:49.068598591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:49.069588 containerd[1529]: time="2025-09-12T17:10:49.069535831Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.952518s"
Sep 12 17:10:49.069588 containerd[1529]: time="2025-09-12T17:10:49.069577631Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 12 17:10:49.070985 containerd[1529]: time="2025-09-12T17:10:49.070953591Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 12 17:10:50.201965 containerd[1529]: time="2025-09-12T17:10:50.201901471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:50.202494 containerd[1529]: time="2025-09-12T17:10:50.202458311Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769"
Sep 12 17:10:50.203343 containerd[1529]: time="2025-09-12T17:10:50.203276791Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:50.206364 containerd[1529]: time="2025-09-12T17:10:50.206307471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:50.207689 containerd[1529]: time="2025-09-12T17:10:50.207646951Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.13665976s"
Sep 12 17:10:50.207689 containerd[1529]: time="2025-09-12T17:10:50.207686271Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 12 17:10:50.208736 containerd[1529]: time="2025-09-12T17:10:50.208503351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 17:10:50.480228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:10:50.481820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:10:50.619999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:10:50.624085 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:10:50.701881 kubelet[2044]: E0912 17:10:50.701792 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:10:50.704857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:10:50.704985 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:10:50.705652 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.4M memory peak.
Sep 12 17:10:51.506527 containerd[1529]: time="2025-09-12T17:10:51.506443751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:51.507379 containerd[1529]: time="2025-09-12T17:10:51.507339711Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508"
Sep 12 17:10:51.508205 containerd[1529]: time="2025-09-12T17:10:51.508089671Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:51.510587 containerd[1529]: time="2025-09-12T17:10:51.510522591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:51.511696 containerd[1529]: time="2025-09-12T17:10:51.511500551Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.3029606s"
Sep 12 17:10:51.511696 containerd[1529]: time="2025-09-12T17:10:51.511539351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 12 17:10:51.512137 containerd[1529]: time="2025-09-12T17:10:51.512097351Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 17:10:52.451526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114230929.mount: Deactivated successfully.
Sep 12 17:10:52.748794 containerd[1529]: time="2025-09-12T17:10:52.748367351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:52.748794 containerd[1529]: time="2025-09-12T17:10:52.748735751Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909"
Sep 12 17:10:52.749603 containerd[1529]: time="2025-09-12T17:10:52.749564711Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:52.751802 containerd[1529]: time="2025-09-12T17:10:52.751688751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:52.752342 containerd[1529]: time="2025-09-12T17:10:52.752308911Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.24016144s"
Sep 12 17:10:52.752420 containerd[1529]: time="2025-09-12T17:10:52.752347591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 12 17:10:52.752857 containerd[1529]: time="2025-09-12T17:10:52.752816871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:10:53.417691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881019943.mount: Deactivated successfully.
Sep 12 17:10:54.547392 containerd[1529]: time="2025-09-12T17:10:54.547332631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:54.605112 containerd[1529]: time="2025-09-12T17:10:54.605043671Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 12 17:10:54.631655 containerd[1529]: time="2025-09-12T17:10:54.631600831Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:54.634250 containerd[1529]: time="2025-09-12T17:10:54.634205111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:54.635321 containerd[1529]: time="2025-09-12T17:10:54.635294111Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.88234456s"
Sep 12 17:10:54.635371 containerd[1529]: time="2025-09-12T17:10:54.635325671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 12 17:10:54.636008 containerd[1529]: time="2025-09-12T17:10:54.635986071Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:10:55.055943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913011830.mount: Deactivated successfully.
Sep 12 17:10:55.061655 containerd[1529]: time="2025-09-12T17:10:55.061108711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:10:55.062499 containerd[1529]: time="2025-09-12T17:10:55.062467751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 12 17:10:55.063597 containerd[1529]: time="2025-09-12T17:10:55.063563991Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:10:55.065749 containerd[1529]: time="2025-09-12T17:10:55.065713311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:10:55.066348 containerd[1529]: time="2025-09-12T17:10:55.066318871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 430.30216ms"
Sep 12 17:10:55.066388 containerd[1529]: time="2025-09-12T17:10:55.066352271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 17:10:55.067051 containerd[1529]: time="2025-09-12T17:10:55.067023711Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 17:10:55.608108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount200274246.mount: Deactivated successfully.
Sep 12 17:10:57.194995 containerd[1529]: time="2025-09-12T17:10:57.194921271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:57.196199 containerd[1529]: time="2025-09-12T17:10:57.196142391Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 12 17:10:57.197440 containerd[1529]: time="2025-09-12T17:10:57.197410391Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:57.200192 containerd[1529]: time="2025-09-12T17:10:57.200147631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:57.201426 containerd[1529]: time="2025-09-12T17:10:57.201388551Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.13433044s"
Sep 12 17:10:57.201426 containerd[1529]: time="2025-09-12T17:10:57.201424551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 12 17:11:00.730352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:11:00.732487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:11:00.895935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:11:00.905460 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:11:00.949786 kubelet[2204]: E0912 17:11:00.948644 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:11:00.954470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:11:00.955010 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:11:00.955886 systemd[1]: kubelet.service: Consumed 147ms CPU time, 106.3M memory peak.
Sep 12 17:11:01.185984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:11:01.186729 systemd[1]: kubelet.service: Consumed 147ms CPU time, 106.3M memory peak.
Sep 12 17:11:01.190606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:11:01.221172 systemd[1]: Reload requested from client PID 2220 ('systemctl') (unit session-7.scope)...
Sep 12 17:11:01.221186 systemd[1]: Reloading...
Sep 12 17:11:01.278236 zram_generator::config[2263]: No configuration found.
Sep 12 17:11:01.448385 systemd[1]: Reloading finished in 226 ms.
Sep 12 17:11:01.513788 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:11:01.513872 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:11:01.514155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:11:01.514202 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95M memory peak.
Sep 12 17:11:01.515704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:11:01.649055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:11:01.661458 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:11:01.696543 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:11:01.696543 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:11:01.696543 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:11:01.696880 kubelet[2308]: I0912 17:11:01.696579 2308 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:11:02.529164 kubelet[2308]: I0912 17:11:02.528774 2308 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:11:02.529164 kubelet[2308]: I0912 17:11:02.528811 2308 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:11:02.529164 kubelet[2308]: I0912 17:11:02.529069 2308 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:11:02.553588 kubelet[2308]: E0912 17:11:02.553525 2308 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:11:02.554161 kubelet[2308]: I0912 17:11:02.554038 2308 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:11:02.562969 kubelet[2308]: I0912 17:11:02.562946 2308 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:11:02.566756 kubelet[2308]: I0912 17:11:02.566719 2308 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:11:02.567611 kubelet[2308]: I0912 17:11:02.567559 2308 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:11:02.567762 kubelet[2308]: I0912 17:11:02.567725 2308 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:11:02.567939 kubelet[2308]: I0912 17:11:02.567755 2308 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:11:02.568066 kubelet[2308]: I0912 17:11:02.568053 2308 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:11:02.568066 kubelet[2308]: I0912 17:11:02.568064 2308 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:11:02.568363 kubelet[2308]: I0912 17:11:02.568325 2308 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:11:02.570271 kubelet[2308]: I0912 17:11:02.570238 2308 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:11:02.570271 kubelet[2308]: I0912 17:11:02.570271 2308 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:11:02.570903 kubelet[2308]: I0912 17:11:02.570297 2308 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:11:02.570903 kubelet[2308]: I0912 17:11:02.570313 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:11:02.574006 kubelet[2308]: W0912 17:11:02.573907 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 12 17:11:02.574006 kubelet[2308]: E0912 17:11:02.573976 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:11:02.574285 kubelet[2308]: I0912 17:11:02.574262 2308 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:11:02.575193 kubelet[2308]: I0912 17:11:02.575171 2308 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:11:02.575668 kubelet[2308]: W0912 17:11:02.575476 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 12 17:11:02.575779 kubelet[2308]: E0912 17:11:02.575761 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:11:02.575994 kubelet[2308]: W0912 17:11:02.575974 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:11:02.578146 kubelet[2308]: I0912 17:11:02.577875 2308 server.go:1274] "Started kubelet"
Sep 12 17:11:02.578146 kubelet[2308]: I0912 17:11:02.577950 2308 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:11:02.578288 kubelet[2308]: I0912 17:11:02.578242 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:11:02.581363 kubelet[2308]: I0912 17:11:02.581323 2308 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:11:02.582814 kubelet[2308]: I0912 17:11:02.582781 2308 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:11:02.583765 kubelet[2308]: I0912 17:11:02.583563 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:11:02.584107 kubelet[2308]: E0912 17:11:02.583073 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864982a6daaab8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:11:02.577286031 +0000 UTC m=+0.912652641,LastTimestamp:2025-09-12 17:11:02.577286031 +0000 UTC m=+0.912652641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 17:11:02.584959 kubelet[2308]: I0912 17:11:02.584938 2308 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:11:02.585887 kubelet[2308]: E0912 17:11:02.585164 2308 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:11:02.585887 kubelet[2308]: I0912 17:11:02.585609 2308 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:11:02.585887 kubelet[2308]: I0912 17:11:02.585717 2308 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:11:02.586741 kubelet[2308]: I0912 17:11:02.586706 2308 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:11:02.586816 kubelet[2308]: I0912 17:11:02.586797 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:11:02.587755 kubelet[2308]: W0912 17:11:02.587032 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Sep 12 17:11:02.587755 kubelet[2308]: E0912 17:11:02.587092 2308 reflector.go:158] "Unhandled Error"
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:11:02.587755 kubelet[2308]: E0912 17:11:02.585406 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Sep 12 17:11:02.587755 kubelet[2308]: I0912 17:11:02.587357 2308 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:11:02.588047 kubelet[2308]: E0912 17:11:02.587926 2308 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:11:02.588644 kubelet[2308]: I0912 17:11:02.588622 2308 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:11:02.596592 kubelet[2308]: I0912 17:11:02.596572 2308 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:11:02.596592 kubelet[2308]: I0912 17:11:02.596587 2308 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:11:02.596744 kubelet[2308]: I0912 17:11:02.596604 2308 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:11:02.600378 kubelet[2308]: I0912 17:11:02.600172 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:11:02.601197 kubelet[2308]: I0912 17:11:02.601175 2308 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:11:02.601197 kubelet[2308]: I0912 17:11:02.601198 2308 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:11:02.601267 kubelet[2308]: I0912 17:11:02.601217 2308 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:11:02.601288 kubelet[2308]: E0912 17:11:02.601264 2308 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:11:02.686332 kubelet[2308]: E0912 17:11:02.686255 2308 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:11:02.701528 kubelet[2308]: E0912 17:11:02.701485 2308 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:11:02.740969 kubelet[2308]: I0912 17:11:02.740923 2308 policy_none.go:49] "None policy: Start" Sep 12 17:11:02.741354 kubelet[2308]: W0912 17:11:02.741294 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Sep 12 17:11:02.741573 kubelet[2308]: E0912 17:11:02.741368 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:11:02.741973 kubelet[2308]: I0912 17:11:02.741953 2308 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:11:02.742055 kubelet[2308]: I0912 17:11:02.741980 2308 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:11:02.749357 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 12 17:11:02.770608 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:11:02.773920 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:11:02.788242 kubelet[2308]: E0912 17:11:02.787108 2308 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:11:02.788876 kubelet[2308]: E0912 17:11:02.788840 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Sep 12 17:11:02.790280 kubelet[2308]: I0912 17:11:02.790253 2308 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:11:02.790549 kubelet[2308]: I0912 17:11:02.790533 2308 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:11:02.790647 kubelet[2308]: I0912 17:11:02.790617 2308 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:11:02.791149 kubelet[2308]: I0912 17:11:02.790822 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:11:02.792101 kubelet[2308]: E0912 17:11:02.792080 2308 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:11:02.894191 kubelet[2308]: I0912 17:11:02.893942 2308 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:11:02.894546 kubelet[2308]: E0912 17:11:02.894517 2308 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Sep 12 17:11:02.909995 systemd[1]: Created slice 
kubepods-burstable-poda3167573c46e4e0b9cb60a9733e13b81.slice - libcontainer container kubepods-burstable-poda3167573c46e4e0b9cb60a9733e13b81.slice. Sep 12 17:11:02.933238 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 12 17:11:02.937638 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 12 17:11:02.988545 kubelet[2308]: I0912 17:11:02.988492 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3167573c46e4e0b9cb60a9733e13b81-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a3167573c46e4e0b9cb60a9733e13b81\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:11:02.988670 kubelet[2308]: I0912 17:11:02.988554 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:11:02.988670 kubelet[2308]: I0912 17:11:02.988582 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3167573c46e4e0b9cb60a9733e13b81-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3167573c46e4e0b9cb60a9733e13b81\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:11:02.988670 kubelet[2308]: I0912 17:11:02.988598 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3167573c46e4e0b9cb60a9733e13b81-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"a3167573c46e4e0b9cb60a9733e13b81\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:11:02.988670 kubelet[2308]: I0912 17:11:02.988613 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:02.988670 kubelet[2308]: I0912 17:11:02.988641 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:02.988770 kubelet[2308]: I0912 17:11:02.988658 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:02.988770 kubelet[2308]: I0912 17:11:02.988675 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:02.988770 kubelet[2308]: I0912 17:11:02.988691 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:03.095970 kubelet[2308]: I0912 17:11:03.095852 2308 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:11:03.096243 kubelet[2308]: E0912 17:11:03.096209 2308 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Sep 12 17:11:03.189999 kubelet[2308]: E0912 17:11:03.189959 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Sep 12 17:11:03.232819 containerd[1529]: time="2025-09-12T17:11:03.232768871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a3167573c46e4e0b9cb60a9733e13b81,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:03.237490 containerd[1529]: time="2025-09-12T17:11:03.237458711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:03.240041 containerd[1529]: time="2025-09-12T17:11:03.239991191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:03.259135 containerd[1529]: time="2025-09-12T17:11:03.259037311Z" level=info msg="connecting to shim 112d4c587676c71360cc772ba1e69284187e38709c1bd24b36060128c4e424d2" address="unix:///run/containerd/s/2138e77524927613bd2cce68fd4186f693841663bc71de9944aae92d3c9b1c5c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 
17:11:03.272513 containerd[1529]: time="2025-09-12T17:11:03.272472831Z" level=info msg="connecting to shim 52529d4a5725d016ad9eca75a6fa5f049a1e86a7923d6708c8653ff77e56bb4a" address="unix:///run/containerd/s/0b2e831bdb1f93a3c686613db41a0176e6f77021bb0fef07ff8ed32dabd2b96c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:03.272951 containerd[1529]: time="2025-09-12T17:11:03.272870551Z" level=info msg="connecting to shim bf7a194e13f9df13dc7ed9c77459230e941f7d7fbcd93ab1d4e0bce2997a0a57" address="unix:///run/containerd/s/f0957930a0c93632fee162c9dc05797837b05caceca16a96e8fbad6773aa4012" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:03.291322 systemd[1]: Started cri-containerd-112d4c587676c71360cc772ba1e69284187e38709c1bd24b36060128c4e424d2.scope - libcontainer container 112d4c587676c71360cc772ba1e69284187e38709c1bd24b36060128c4e424d2. Sep 12 17:11:03.296155 systemd[1]: Started cri-containerd-52529d4a5725d016ad9eca75a6fa5f049a1e86a7923d6708c8653ff77e56bb4a.scope - libcontainer container 52529d4a5725d016ad9eca75a6fa5f049a1e86a7923d6708c8653ff77e56bb4a. Sep 12 17:11:03.316319 systemd[1]: Started cri-containerd-bf7a194e13f9df13dc7ed9c77459230e941f7d7fbcd93ab1d4e0bce2997a0a57.scope - libcontainer container bf7a194e13f9df13dc7ed9c77459230e941f7d7fbcd93ab1d4e0bce2997a0a57. 
Sep 12 17:11:03.353650 containerd[1529]: time="2025-09-12T17:11:03.353439231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a3167573c46e4e0b9cb60a9733e13b81,Namespace:kube-system,Attempt:0,} returns sandbox id \"112d4c587676c71360cc772ba1e69284187e38709c1bd24b36060128c4e424d2\"" Sep 12 17:11:03.355727 containerd[1529]: time="2025-09-12T17:11:03.355678791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"52529d4a5725d016ad9eca75a6fa5f049a1e86a7923d6708c8653ff77e56bb4a\"" Sep 12 17:11:03.358093 containerd[1529]: time="2025-09-12T17:11:03.358066191Z" level=info msg="CreateContainer within sandbox \"112d4c587676c71360cc772ba1e69284187e38709c1bd24b36060128c4e424d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:11:03.359067 containerd[1529]: time="2025-09-12T17:11:03.358906751Z" level=info msg="CreateContainer within sandbox \"52529d4a5725d016ad9eca75a6fa5f049a1e86a7923d6708c8653ff77e56bb4a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:11:03.360657 containerd[1529]: time="2025-09-12T17:11:03.360597071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf7a194e13f9df13dc7ed9c77459230e941f7d7fbcd93ab1d4e0bce2997a0a57\"" Sep 12 17:11:03.362969 containerd[1529]: time="2025-09-12T17:11:03.362942951Z" level=info msg="CreateContainer within sandbox \"bf7a194e13f9df13dc7ed9c77459230e941f7d7fbcd93ab1d4e0bce2997a0a57\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:11:03.369304 containerd[1529]: time="2025-09-12T17:11:03.369262231Z" level=info msg="Container b3ca4f30c382c9b073f18eaa85b23f29065e2cff38e3d388a1fbdd70689eae02: CDI devices from CRI Config.CDIDevices: []" Sep 12 
17:11:03.370529 containerd[1529]: time="2025-09-12T17:11:03.370497071Z" level=info msg="Container 4782d59db8fcfd3f2d321a722cfe321fe9a9ec7ebefb4a3cc92b75ea73f8b06b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:03.374545 containerd[1529]: time="2025-09-12T17:11:03.374498911Z" level=info msg="Container 52fbb2bb140b00443ad7192704b74d5b4530d676a954b1c5194d1c5e4091e7fc: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:03.379823 containerd[1529]: time="2025-09-12T17:11:03.379779951Z" level=info msg="CreateContainer within sandbox \"52529d4a5725d016ad9eca75a6fa5f049a1e86a7923d6708c8653ff77e56bb4a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b3ca4f30c382c9b073f18eaa85b23f29065e2cff38e3d388a1fbdd70689eae02\"" Sep 12 17:11:03.381072 containerd[1529]: time="2025-09-12T17:11:03.380519271Z" level=info msg="StartContainer for \"b3ca4f30c382c9b073f18eaa85b23f29065e2cff38e3d388a1fbdd70689eae02\"" Sep 12 17:11:03.381072 containerd[1529]: time="2025-09-12T17:11:03.380973271Z" level=info msg="CreateContainer within sandbox \"112d4c587676c71360cc772ba1e69284187e38709c1bd24b36060128c4e424d2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4782d59db8fcfd3f2d321a722cfe321fe9a9ec7ebefb4a3cc92b75ea73f8b06b\"" Sep 12 17:11:03.381412 containerd[1529]: time="2025-09-12T17:11:03.381381471Z" level=info msg="StartContainer for \"4782d59db8fcfd3f2d321a722cfe321fe9a9ec7ebefb4a3cc92b75ea73f8b06b\"" Sep 12 17:11:03.381547 containerd[1529]: time="2025-09-12T17:11:03.381520711Z" level=info msg="connecting to shim b3ca4f30c382c9b073f18eaa85b23f29065e2cff38e3d388a1fbdd70689eae02" address="unix:///run/containerd/s/0b2e831bdb1f93a3c686613db41a0176e6f77021bb0fef07ff8ed32dabd2b96c" protocol=ttrpc version=3 Sep 12 17:11:03.382384 containerd[1529]: time="2025-09-12T17:11:03.382351711Z" level=info msg="connecting to shim 4782d59db8fcfd3f2d321a722cfe321fe9a9ec7ebefb4a3cc92b75ea73f8b06b" 
address="unix:///run/containerd/s/2138e77524927613bd2cce68fd4186f693841663bc71de9944aae92d3c9b1c5c" protocol=ttrpc version=3 Sep 12 17:11:03.383828 containerd[1529]: time="2025-09-12T17:11:03.383802151Z" level=info msg="CreateContainer within sandbox \"bf7a194e13f9df13dc7ed9c77459230e941f7d7fbcd93ab1d4e0bce2997a0a57\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"52fbb2bb140b00443ad7192704b74d5b4530d676a954b1c5194d1c5e4091e7fc\"" Sep 12 17:11:03.384828 containerd[1529]: time="2025-09-12T17:11:03.384802511Z" level=info msg="StartContainer for \"52fbb2bb140b00443ad7192704b74d5b4530d676a954b1c5194d1c5e4091e7fc\"" Sep 12 17:11:03.385832 containerd[1529]: time="2025-09-12T17:11:03.385801951Z" level=info msg="connecting to shim 52fbb2bb140b00443ad7192704b74d5b4530d676a954b1c5194d1c5e4091e7fc" address="unix:///run/containerd/s/f0957930a0c93632fee162c9dc05797837b05caceca16a96e8fbad6773aa4012" protocol=ttrpc version=3 Sep 12 17:11:03.406303 systemd[1]: Started cri-containerd-4782d59db8fcfd3f2d321a722cfe321fe9a9ec7ebefb4a3cc92b75ea73f8b06b.scope - libcontainer container 4782d59db8fcfd3f2d321a722cfe321fe9a9ec7ebefb4a3cc92b75ea73f8b06b. Sep 12 17:11:03.407393 systemd[1]: Started cri-containerd-b3ca4f30c382c9b073f18eaa85b23f29065e2cff38e3d388a1fbdd70689eae02.scope - libcontainer container b3ca4f30c382c9b073f18eaa85b23f29065e2cff38e3d388a1fbdd70689eae02. Sep 12 17:11:03.410375 systemd[1]: Started cri-containerd-52fbb2bb140b00443ad7192704b74d5b4530d676a954b1c5194d1c5e4091e7fc.scope - libcontainer container 52fbb2bb140b00443ad7192704b74d5b4530d676a954b1c5194d1c5e4091e7fc. 
Sep 12 17:11:03.457781 containerd[1529]: time="2025-09-12T17:11:03.457701351Z" level=info msg="StartContainer for \"b3ca4f30c382c9b073f18eaa85b23f29065e2cff38e3d388a1fbdd70689eae02\" returns successfully" Sep 12 17:11:03.461354 containerd[1529]: time="2025-09-12T17:11:03.461276471Z" level=info msg="StartContainer for \"52fbb2bb140b00443ad7192704b74d5b4530d676a954b1c5194d1c5e4091e7fc\" returns successfully" Sep 12 17:11:03.463526 containerd[1529]: time="2025-09-12T17:11:03.463491311Z" level=info msg="StartContainer for \"4782d59db8fcfd3f2d321a722cfe321fe9a9ec7ebefb4a3cc92b75ea73f8b06b\" returns successfully" Sep 12 17:11:03.499633 kubelet[2308]: I0912 17:11:03.499561 2308 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:11:03.500045 kubelet[2308]: E0912 17:11:03.500000 2308 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Sep 12 17:11:04.301684 kubelet[2308]: I0912 17:11:04.301655 2308 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:11:04.704937 kubelet[2308]: E0912 17:11:04.704821 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:11:04.777328 kubelet[2308]: I0912 17:11:04.777289 2308 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:11:04.777328 kubelet[2308]: E0912 17:11:04.777326 2308 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:11:05.572140 kubelet[2308]: I0912 17:11:05.572050 2308 apiserver.go:52] "Watching apiserver" Sep 12 17:11:05.586204 kubelet[2308]: I0912 17:11:05.586157 2308 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:11:07.098580 systemd[1]: Reload 
requested from client PID 2585 ('systemctl') (unit session-7.scope)... Sep 12 17:11:07.098602 systemd[1]: Reloading... Sep 12 17:11:07.158145 zram_generator::config[2628]: No configuration found. Sep 12 17:11:07.352779 systemd[1]: Reloading finished in 253 ms. Sep 12 17:11:07.374132 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:11:07.396519 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:11:07.396758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:11:07.396820 systemd[1]: kubelet.service: Consumed 1.305s CPU time, 130.6M memory peak. Sep 12 17:11:07.399291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:11:07.573600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:11:07.577976 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:11:07.630283 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:11:07.630283 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:11:07.630283 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:11:07.630765 kubelet[2670]: I0912 17:11:07.630199 2670 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:11:07.636970 kubelet[2670]: I0912 17:11:07.636934 2670 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:11:07.636970 kubelet[2670]: I0912 17:11:07.636961 2670 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:11:07.637431 kubelet[2670]: I0912 17:11:07.637351 2670 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:11:07.639434 kubelet[2670]: I0912 17:11:07.639410 2670 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:11:07.641916 kubelet[2670]: I0912 17:11:07.641803 2670 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:11:07.645405 kubelet[2670]: I0912 17:11:07.645386 2670 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:11:07.647922 kubelet[2670]: I0912 17:11:07.647863 2670 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:11:07.648057 kubelet[2670]: I0912 17:11:07.648046 2670 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:11:07.648340 kubelet[2670]: I0912 17:11:07.648308 2670 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:11:07.648584 kubelet[2670]: I0912 17:11:07.648411 2670 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 12 17:11:07.648707 kubelet[2670]: I0912 17:11:07.648692 2670 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:11:07.648790 kubelet[2670]: I0912 17:11:07.648782 2670 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:11:07.648874 kubelet[2670]: I0912 17:11:07.648865 2670 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:11:07.649027 kubelet[2670]: I0912 17:11:07.649017 2670 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:11:07.649675 kubelet[2670]: I0912 17:11:07.649628 2670 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:11:07.649793 kubelet[2670]: I0912 17:11:07.649781 2670 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:11:07.649852 kubelet[2670]: I0912 17:11:07.649844 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:11:07.650690 kubelet[2670]: I0912 17:11:07.650502 2670 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:11:07.651630 kubelet[2670]: I0912 17:11:07.651208 2670 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:11:07.653201 kubelet[2670]: I0912 17:11:07.652212 2670 server.go:1274] "Started kubelet" Sep 12 17:11:07.653201 kubelet[2670]: I0912 17:11:07.652552 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:11:07.653201 kubelet[2670]: I0912 17:11:07.652925 2670 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:11:07.653201 kubelet[2670]: I0912 17:11:07.653039 2670 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:11:07.654793 kubelet[2670]: I0912 17:11:07.654567 2670 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:11:07.655084 
kubelet[2670]: I0912 17:11:07.654894 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:11:07.656214 kubelet[2670]: I0912 17:11:07.655763 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:11:07.656360 kubelet[2670]: I0912 17:11:07.656299 2670 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:11:07.656750 kubelet[2670]: I0912 17:11:07.656690 2670 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:11:07.657023 kubelet[2670]: E0912 17:11:07.657001 2670 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:11:07.657132 kubelet[2670]: I0912 17:11:07.657019 2670 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:11:07.659593 kubelet[2670]: I0912 17:11:07.659562 2670 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:11:07.659816 kubelet[2670]: E0912 17:11:07.659791 2670 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:11:07.660510 kubelet[2670]: I0912 17:11:07.660472 2670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:11:07.680967 kubelet[2670]: I0912 17:11:07.680921 2670 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:11:07.682706 kubelet[2670]: I0912 17:11:07.682646 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:11:07.683703 kubelet[2670]: I0912 17:11:07.683619 2670 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:11:07.683703 kubelet[2670]: I0912 17:11:07.683651 2670 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:11:07.683703 kubelet[2670]: I0912 17:11:07.683670 2670 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:11:07.683819 kubelet[2670]: E0912 17:11:07.683712 2670 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:11:07.714779 kubelet[2670]: I0912 17:11:07.714742 2670 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:11:07.714779 kubelet[2670]: I0912 17:11:07.714765 2670 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:11:07.714779 kubelet[2670]: I0912 17:11:07.714787 2670 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:11:07.714958 kubelet[2670]: I0912 17:11:07.714941 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:11:07.714982 kubelet[2670]: I0912 17:11:07.714956 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:11:07.714982 kubelet[2670]: I0912 17:11:07.714973 2670 policy_none.go:49] "None policy: Start" Sep 12 17:11:07.715829 kubelet[2670]: I0912 17:11:07.715734 2670 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:11:07.715829 kubelet[2670]: I0912 17:11:07.715776 2670 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:11:07.715975 kubelet[2670]: I0912 17:11:07.715906 2670 state_mem.go:75] "Updated machine memory state" Sep 12 17:11:07.720283 kubelet[2670]: I0912 17:11:07.720244 2670 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:11:07.720448 kubelet[2670]: I0912 17:11:07.720412 2670 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:11:07.720491 kubelet[2670]: I0912 17:11:07.720431 2670 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:11:07.720779 kubelet[2670]: I0912 17:11:07.720731 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:11:07.796787 kubelet[2670]: E0912 17:11:07.796739 2670 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:11:07.822231 kubelet[2670]: I0912 17:11:07.822187 2670 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:11:07.831091 kubelet[2670]: I0912 17:11:07.831052 2670 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 17:11:07.831196 kubelet[2670]: I0912 17:11:07.831169 2670 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:11:07.858377 kubelet[2670]: I0912 17:11:07.858322 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3167573c46e4e0b9cb60a9733e13b81-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3167573c46e4e0b9cb60a9733e13b81\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:11:07.858377 kubelet[2670]: I0912 17:11:07.858361 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3167573c46e4e0b9cb60a9733e13b81-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a3167573c46e4e0b9cb60a9733e13b81\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:11:07.858377 kubelet[2670]: I0912 17:11:07.858384 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:07.858547 kubelet[2670]: I0912 17:11:07.858435 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:07.858547 kubelet[2670]: I0912 17:11:07.858471 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3167573c46e4e0b9cb60a9733e13b81-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3167573c46e4e0b9cb60a9733e13b81\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:11:07.858547 kubelet[2670]: I0912 17:11:07.858491 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:07.858547 kubelet[2670]: I0912 17:11:07.858508 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:07.858547 kubelet[2670]: I0912 17:11:07.858525 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:11:07.858654 kubelet[2670]: I0912 17:11:07.858541 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:11:08.650907 kubelet[2670]: I0912 17:11:08.650816 2670 apiserver.go:52] "Watching apiserver" Sep 12 17:11:08.659156 kubelet[2670]: I0912 17:11:08.658537 2670 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:11:08.685688 kubelet[2670]: I0912 17:11:08.684879 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.684860831 podStartE2EDuration="1.684860831s" podCreationTimestamp="2025-09-12 17:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:08.684851671 +0000 UTC m=+1.104026681" watchObservedRunningTime="2025-09-12 17:11:08.684860831 +0000 UTC m=+1.104035841" Sep 12 17:11:08.708363 kubelet[2670]: I0912 17:11:08.708283 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.7082642310000002 podStartE2EDuration="3.708264231s" podCreationTimestamp="2025-09-12 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:08.698414431 +0000 UTC m=+1.117589441" watchObservedRunningTime="2025-09-12 17:11:08.708264231 +0000 UTC m=+1.127439241" Sep 12 17:11:08.719335 kubelet[2670]: I0912 17:11:08.719184 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7191643110000001 podStartE2EDuration="1.719164311s" podCreationTimestamp="2025-09-12 17:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:08.708676631 +0000 UTC m=+1.127851641" watchObservedRunningTime="2025-09-12 17:11:08.719164311 +0000 UTC m=+1.138339321" Sep 12 17:11:12.692113 kubelet[2670]: I0912 17:11:12.692066 2670 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:11:12.693318 kubelet[2670]: I0912 17:11:12.692761 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:11:12.693356 containerd[1529]: time="2025-09-12T17:11:12.692477090Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:11:13.396928 kubelet[2670]: I0912 17:11:13.396507 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/753722ec-8d65-4ff2-97e2-839848d9b4ff-xtables-lock\") pod \"kube-proxy-p478n\" (UID: \"753722ec-8d65-4ff2-97e2-839848d9b4ff\") " pod="kube-system/kube-proxy-p478n" Sep 12 17:11:13.396928 kubelet[2670]: I0912 17:11:13.396547 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/753722ec-8d65-4ff2-97e2-839848d9b4ff-lib-modules\") pod \"kube-proxy-p478n\" (UID: \"753722ec-8d65-4ff2-97e2-839848d9b4ff\") " pod="kube-system/kube-proxy-p478n" Sep 12 17:11:13.396928 kubelet[2670]: I0912 17:11:13.396567 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6bw\" (UniqueName: \"kubernetes.io/projected/753722ec-8d65-4ff2-97e2-839848d9b4ff-kube-api-access-7h6bw\") pod 
\"kube-proxy-p478n\" (UID: \"753722ec-8d65-4ff2-97e2-839848d9b4ff\") " pod="kube-system/kube-proxy-p478n" Sep 12 17:11:13.396928 kubelet[2670]: I0912 17:11:13.396586 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/753722ec-8d65-4ff2-97e2-839848d9b4ff-kube-proxy\") pod \"kube-proxy-p478n\" (UID: \"753722ec-8d65-4ff2-97e2-839848d9b4ff\") " pod="kube-system/kube-proxy-p478n" Sep 12 17:11:13.396813 systemd[1]: Created slice kubepods-besteffort-pod753722ec_8d65_4ff2_97e2_839848d9b4ff.slice - libcontainer container kubepods-besteffort-pod753722ec_8d65_4ff2_97e2_839848d9b4ff.slice. Sep 12 17:11:13.719308 containerd[1529]: time="2025-09-12T17:11:13.719215713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p478n,Uid:753722ec-8d65-4ff2-97e2-839848d9b4ff,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:13.742620 containerd[1529]: time="2025-09-12T17:11:13.742574149Z" level=info msg="connecting to shim e5c4f5bb4e4682fa2084f48cc2fa547a350b367715894e4e405aee94765b5e76" address="unix:///run/containerd/s/a91f6a465156f6201e678df8faf6bba137903355577666773e4d016cea4b0b41" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:13.765354 systemd[1]: Started cri-containerd-e5c4f5bb4e4682fa2084f48cc2fa547a350b367715894e4e405aee94765b5e76.scope - libcontainer container e5c4f5bb4e4682fa2084f48cc2fa547a350b367715894e4e405aee94765b5e76. 
Sep 12 17:11:13.806815 containerd[1529]: time="2025-09-12T17:11:13.806760449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p478n,Uid:753722ec-8d65-4ff2-97e2-839848d9b4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5c4f5bb4e4682fa2084f48cc2fa547a350b367715894e4e405aee94765b5e76\"" Sep 12 17:11:13.810218 containerd[1529]: time="2025-09-12T17:11:13.810152021Z" level=info msg="CreateContainer within sandbox \"e5c4f5bb4e4682fa2084f48cc2fa547a350b367715894e4e405aee94765b5e76\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:11:13.830038 systemd[1]: Created slice kubepods-besteffort-podc37daa09_fb79_4294_aa3c_fbccc338f6dc.slice - libcontainer container kubepods-besteffort-podc37daa09_fb79_4294_aa3c_fbccc338f6dc.slice. Sep 12 17:11:13.879973 containerd[1529]: time="2025-09-12T17:11:13.879302156Z" level=info msg="Container babe75a7906da24585c9b834392c77c632132add6a393a04b185c1b590ed937d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:13.889768 containerd[1529]: time="2025-09-12T17:11:13.889709755Z" level=info msg="CreateContainer within sandbox \"e5c4f5bb4e4682fa2084f48cc2fa547a350b367715894e4e405aee94765b5e76\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"babe75a7906da24585c9b834392c77c632132add6a393a04b185c1b590ed937d\"" Sep 12 17:11:13.890392 containerd[1529]: time="2025-09-12T17:11:13.890364125Z" level=info msg="StartContainer for \"babe75a7906da24585c9b834392c77c632132add6a393a04b185c1b590ed937d\"" Sep 12 17:11:13.892214 containerd[1529]: time="2025-09-12T17:11:13.892146312Z" level=info msg="connecting to shim babe75a7906da24585c9b834392c77c632132add6a393a04b185c1b590ed937d" address="unix:///run/containerd/s/a91f6a465156f6201e678df8faf6bba137903355577666773e4d016cea4b0b41" protocol=ttrpc version=3 Sep 12 17:11:13.900103 kubelet[2670]: I0912 17:11:13.900058 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rxdv5\" (UniqueName: \"kubernetes.io/projected/c37daa09-fb79-4294-aa3c-fbccc338f6dc-kube-api-access-rxdv5\") pod \"tigera-operator-58fc44c59b-whw9b\" (UID: \"c37daa09-fb79-4294-aa3c-fbccc338f6dc\") " pod="tigera-operator/tigera-operator-58fc44c59b-whw9b" Sep 12 17:11:13.900411 kubelet[2670]: I0912 17:11:13.900113 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c37daa09-fb79-4294-aa3c-fbccc338f6dc-var-lib-calico\") pod \"tigera-operator-58fc44c59b-whw9b\" (UID: \"c37daa09-fb79-4294-aa3c-fbccc338f6dc\") " pod="tigera-operator/tigera-operator-58fc44c59b-whw9b" Sep 12 17:11:13.912380 systemd[1]: Started cri-containerd-babe75a7906da24585c9b834392c77c632132add6a393a04b185c1b590ed937d.scope - libcontainer container babe75a7906da24585c9b834392c77c632132add6a393a04b185c1b590ed937d. Sep 12 17:11:13.943931 containerd[1529]: time="2025-09-12T17:11:13.943224412Z" level=info msg="StartContainer for \"babe75a7906da24585c9b834392c77c632132add6a393a04b185c1b590ed937d\" returns successfully" Sep 12 17:11:14.133209 containerd[1529]: time="2025-09-12T17:11:14.133161423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-whw9b,Uid:c37daa09-fb79-4294-aa3c-fbccc338f6dc,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:11:14.149468 containerd[1529]: time="2025-09-12T17:11:14.149425016Z" level=info msg="connecting to shim f4ceac3de459a75ef4c8afac31a4068c7fa9ab04c7c71dc877172f4f03bafd4b" address="unix:///run/containerd/s/1a50aa24df337ef0c25573d7e01ce7ebc716c962add493e36a0e8343a09c1648" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:14.173268 systemd[1]: Started cri-containerd-f4ceac3de459a75ef4c8afac31a4068c7fa9ab04c7c71dc877172f4f03bafd4b.scope - libcontainer container f4ceac3de459a75ef4c8afac31a4068c7fa9ab04c7c71dc877172f4f03bafd4b. 
Sep 12 17:11:14.206791 containerd[1529]: time="2025-09-12T17:11:14.206719676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-whw9b,Uid:c37daa09-fb79-4294-aa3c-fbccc338f6dc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f4ceac3de459a75ef4c8afac31a4068c7fa9ab04c7c71dc877172f4f03bafd4b\"" Sep 12 17:11:14.208776 containerd[1529]: time="2025-09-12T17:11:14.208748905Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:11:15.184708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011026103.mount: Deactivated successfully. Sep 12 17:11:15.737894 containerd[1529]: time="2025-09-12T17:11:15.737842523Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:15.738850 containerd[1529]: time="2025-09-12T17:11:15.738709895Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 17:11:15.739984 containerd[1529]: time="2025-09-12T17:11:15.739945631Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:15.742186 containerd[1529]: time="2025-09-12T17:11:15.742040779Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:15.742977 containerd[1529]: time="2025-09-12T17:11:15.742934191Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.534150646s" Sep 12 17:11:15.742977 
containerd[1529]: time="2025-09-12T17:11:15.742970552Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 17:11:15.745329 containerd[1529]: time="2025-09-12T17:11:15.744890138Z" level=info msg="CreateContainer within sandbox \"f4ceac3de459a75ef4c8afac31a4068c7fa9ab04c7c71dc877172f4f03bafd4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:11:15.752718 containerd[1529]: time="2025-09-12T17:11:15.752675242Z" level=info msg="Container 6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:15.755267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536404957.mount: Deactivated successfully. Sep 12 17:11:15.759059 containerd[1529]: time="2025-09-12T17:11:15.759008447Z" level=info msg="CreateContainer within sandbox \"f4ceac3de459a75ef4c8afac31a4068c7fa9ab04c7c71dc877172f4f03bafd4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef\"" Sep 12 17:11:15.759732 containerd[1529]: time="2025-09-12T17:11:15.759704736Z" level=info msg="StartContainer for \"6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef\"" Sep 12 17:11:15.760597 containerd[1529]: time="2025-09-12T17:11:15.760570588Z" level=info msg="connecting to shim 6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef" address="unix:///run/containerd/s/1a50aa24df337ef0c25573d7e01ce7ebc716c962add493e36a0e8343a09c1648" protocol=ttrpc version=3 Sep 12 17:11:15.779272 systemd[1]: Started cri-containerd-6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef.scope - libcontainer container 6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef. 
Sep 12 17:11:15.804994 containerd[1529]: time="2025-09-12T17:11:15.804955423Z" level=info msg="StartContainer for \"6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef\" returns successfully" Sep 12 17:11:16.726002 kubelet[2670]: I0912 17:11:16.725494 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p478n" podStartSLOduration=3.725476723 podStartE2EDuration="3.725476723s" podCreationTimestamp="2025-09-12 17:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:14.72838378 +0000 UTC m=+7.147558790" watchObservedRunningTime="2025-09-12 17:11:16.725476723 +0000 UTC m=+9.144651733" Sep 12 17:11:18.070551 systemd[1]: cri-containerd-6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef.scope: Deactivated successfully. Sep 12 17:11:18.107232 containerd[1529]: time="2025-09-12T17:11:18.107169549Z" level=info msg="received exit event container_id:\"6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef\" id:\"6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef\" pid:2994 exit_status:1 exited_at:{seconds:1757697078 nanos:95902144}" Sep 12 17:11:18.113363 containerd[1529]: time="2025-09-12T17:11:18.113215976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef\" id:\"6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef\" pid:2994 exit_status:1 exited_at:{seconds:1757697078 nanos:95902144}" Sep 12 17:11:18.181878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:18.731540 kubelet[2670]: I0912 17:11:18.731505 2670 scope.go:117] "RemoveContainer" containerID="6a3787d275b0883f982e8de0b125ddf12388d896d2b4c3c6ba3a20070fed98ef" Sep 12 17:11:18.737708 containerd[1529]: time="2025-09-12T17:11:18.737661677Z" level=info msg="CreateContainer within sandbox \"f4ceac3de459a75ef4c8afac31a4068c7fa9ab04c7c71dc877172f4f03bafd4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:11:18.749237 containerd[1529]: time="2025-09-12T17:11:18.749082444Z" level=info msg="Container 0ee22e792a0fa3d7ebcaf837246fa5bdeaaefa8d6d085239de7ab3fabb0b17c5: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:18.756243 containerd[1529]: time="2025-09-12T17:11:18.756190482Z" level=info msg="CreateContainer within sandbox \"f4ceac3de459a75ef4c8afac31a4068c7fa9ab04c7c71dc877172f4f03bafd4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0ee22e792a0fa3d7ebcaf837246fa5bdeaaefa8d6d085239de7ab3fabb0b17c5\"" Sep 12 17:11:18.758924 containerd[1529]: time="2025-09-12T17:11:18.758887232Z" level=info msg="StartContainer for \"0ee22e792a0fa3d7ebcaf837246fa5bdeaaefa8d6d085239de7ab3fabb0b17c5\"" Sep 12 17:11:18.762871 containerd[1529]: time="2025-09-12T17:11:18.762823596Z" level=info msg="connecting to shim 0ee22e792a0fa3d7ebcaf837246fa5bdeaaefa8d6d085239de7ab3fabb0b17c5" address="unix:///run/containerd/s/1a50aa24df337ef0c25573d7e01ce7ebc716c962add493e36a0e8343a09c1648" protocol=ttrpc version=3 Sep 12 17:11:18.789440 systemd[1]: Started cri-containerd-0ee22e792a0fa3d7ebcaf837246fa5bdeaaefa8d6d085239de7ab3fabb0b17c5.scope - libcontainer container 0ee22e792a0fa3d7ebcaf837246fa5bdeaaefa8d6d085239de7ab3fabb0b17c5. 
Sep 12 17:11:18.841336 containerd[1529]: time="2025-09-12T17:11:18.841298863Z" level=info msg="StartContainer for \"0ee22e792a0fa3d7ebcaf837246fa5bdeaaefa8d6d085239de7ab3fabb0b17c5\" returns successfully" Sep 12 17:11:19.782831 kubelet[2670]: I0912 17:11:19.782547 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-whw9b" podStartSLOduration=5.246831018 podStartE2EDuration="6.782528925s" podCreationTimestamp="2025-09-12 17:11:13 +0000 UTC" firstStartedPulling="2025-09-12 17:11:14.208068975 +0000 UTC m=+6.627243945" lastFinishedPulling="2025-09-12 17:11:15.743766842 +0000 UTC m=+8.162941852" observedRunningTime="2025-09-12 17:11:16.725738246 +0000 UTC m=+9.144913256" watchObservedRunningTime="2025-09-12 17:11:19.782528925 +0000 UTC m=+12.201703935" Sep 12 17:11:21.223100 sudo[1737]: pam_unix(sudo:session): session closed for user root Sep 12 17:11:21.224603 sshd[1736]: Connection closed by 10.0.0.1 port 39214 Sep 12 17:11:21.225759 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:21.229793 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:11:21.230010 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:39214.service: Deactivated successfully. Sep 12 17:11:21.231785 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:11:21.232067 systemd[1]: session-7.scope: Consumed 5.815s CPU time, 221.1M memory peak. Sep 12 17:11:21.233852 systemd-logind[1509]: Removed session 7. Sep 12 17:11:23.053678 update_engine[1511]: I20250912 17:11:23.053145 1511 update_attempter.cc:509] Updating boot flags... Sep 12 17:11:28.666180 systemd[1]: Created slice kubepods-besteffort-pod4e5f8996_5ab5_4d5a_8197_da4c8447cefd.slice - libcontainer container kubepods-besteffort-pod4e5f8996_5ab5_4d5a_8197_da4c8447cefd.slice. 
Sep 12 17:11:28.696981 kubelet[2670]: I0912 17:11:28.696937 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e5f8996-5ab5-4d5a-8197-da4c8447cefd-tigera-ca-bundle\") pod \"calico-typha-67dbc57c94-dld52\" (UID: \"4e5f8996-5ab5-4d5a-8197-da4c8447cefd\") " pod="calico-system/calico-typha-67dbc57c94-dld52" Sep 12 17:11:28.696981 kubelet[2670]: I0912 17:11:28.696982 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4e5f8996-5ab5-4d5a-8197-da4c8447cefd-typha-certs\") pod \"calico-typha-67dbc57c94-dld52\" (UID: \"4e5f8996-5ab5-4d5a-8197-da4c8447cefd\") " pod="calico-system/calico-typha-67dbc57c94-dld52" Sep 12 17:11:28.697385 kubelet[2670]: I0912 17:11:28.697003 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9vk\" (UniqueName: \"kubernetes.io/projected/4e5f8996-5ab5-4d5a-8197-da4c8447cefd-kube-api-access-mc9vk\") pod \"calico-typha-67dbc57c94-dld52\" (UID: \"4e5f8996-5ab5-4d5a-8197-da4c8447cefd\") " pod="calico-system/calico-typha-67dbc57c94-dld52" Sep 12 17:11:28.871100 systemd[1]: Created slice kubepods-besteffort-pod7c1b7bf7_3fd5_45a7_93a8_d1c0b7d9ba82.slice - libcontainer container kubepods-besteffort-pod7c1b7bf7_3fd5_45a7_93a8_d1c0b7d9ba82.slice. 
Sep 12 17:11:28.899506 kubelet[2670]: I0912 17:11:28.899432 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-tigera-ca-bundle\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.899506 kubelet[2670]: I0912 17:11:28.899507 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxd9\" (UniqueName: \"kubernetes.io/projected/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-kube-api-access-bdxd9\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.899781 kubelet[2670]: I0912 17:11:28.899530 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-flexvol-driver-host\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.899781 kubelet[2670]: I0912 17:11:28.899569 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-xtables-lock\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.899781 kubelet[2670]: I0912 17:11:28.899588 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-cni-bin-dir\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.899781 kubelet[2670]: I0912 
17:11:28.899604 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-cni-net-dir\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.899781 kubelet[2670]: I0912 17:11:28.899641 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-var-run-calico\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.899887 kubelet[2670]: I0912 17:11:28.899662 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-lib-modules\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.900082 kubelet[2670]: I0912 17:11:28.899933 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-policysync\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.900082 kubelet[2670]: I0912 17:11:28.899975 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-cni-log-dir\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.900082 kubelet[2670]: I0912 17:11:28.900009 2670 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-node-certs\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.900082 kubelet[2670]: I0912 17:11:28.900026 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82-var-lib-calico\") pod \"calico-node-96c9d\" (UID: \"7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82\") " pod="calico-system/calico-node-96c9d" Sep 12 17:11:28.975666 containerd[1529]: time="2025-09-12T17:11:28.974689595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67dbc57c94-dld52,Uid:4e5f8996-5ab5-4d5a-8197-da4c8447cefd,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:29.020330 kubelet[2670]: E0912 17:11:29.020026 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.020330 kubelet[2670]: W0912 17:11:29.020053 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.020330 kubelet[2670]: E0912 17:11:29.020085 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.022304 containerd[1529]: time="2025-09-12T17:11:29.022264303Z" level=info msg="connecting to shim 5f82387ff6e6e48b8a8883225bbe1933717877483b3f47b23455eb2a25552aec" address="unix:///run/containerd/s/99acb16a2f0988ac3cdc78c2b3cee4f8b74bdd517e584ef6d55e7e1d2b1877ea" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:29.026578 kubelet[2670]: E0912 17:11:29.025000 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.026578 kubelet[2670]: W0912 17:11:29.025027 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.026578 kubelet[2670]: E0912 17:11:29.025304 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.063330 systemd[1]: Started cri-containerd-5f82387ff6e6e48b8a8883225bbe1933717877483b3f47b23455eb2a25552aec.scope - libcontainer container 5f82387ff6e6e48b8a8883225bbe1933717877483b3f47b23455eb2a25552aec. 
Sep 12 17:11:29.113218 kubelet[2670]: E0912 17:11:29.111855 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkw85" podUID="8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11" Sep 12 17:11:29.161784 containerd[1529]: time="2025-09-12T17:11:29.161738981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67dbc57c94-dld52,Uid:4e5f8996-5ab5-4d5a-8197-da4c8447cefd,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f82387ff6e6e48b8a8883225bbe1933717877483b3f47b23455eb2a25552aec\"" Sep 12 17:11:29.163927 containerd[1529]: time="2025-09-12T17:11:29.163892713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:11:29.175749 containerd[1529]: time="2025-09-12T17:11:29.175585177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-96c9d,Uid:7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:29.190969 kubelet[2670]: E0912 17:11:29.190919 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.190969 kubelet[2670]: W0912 17:11:29.190946 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.190969 kubelet[2670]: E0912 17:11:29.190967 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.191172 kubelet[2670]: E0912 17:11:29.191145 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.191172 kubelet[2670]: W0912 17:11:29.191153 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.191172 kubelet[2670]: E0912 17:11:29.191161 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.191350 kubelet[2670]: E0912 17:11:29.191298 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.191350 kubelet[2670]: W0912 17:11:29.191309 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.191350 kubelet[2670]: E0912 17:11:29.191317 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.191527 kubelet[2670]: E0912 17:11:29.191443 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.191527 kubelet[2670]: W0912 17:11:29.191450 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.191527 kubelet[2670]: E0912 17:11:29.191458 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.191621 kubelet[2670]: E0912 17:11:29.191594 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.191621 kubelet[2670]: W0912 17:11:29.191602 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.191621 kubelet[2670]: E0912 17:11:29.191609 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.191836 kubelet[2670]: E0912 17:11:29.191769 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.191836 kubelet[2670]: W0912 17:11:29.191776 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.191836 kubelet[2670]: E0912 17:11:29.191783 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.192034 kubelet[2670]: E0912 17:11:29.191917 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.192034 kubelet[2670]: W0912 17:11:29.191925 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.192034 kubelet[2670]: E0912 17:11:29.191932 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.192225 kubelet[2670]: E0912 17:11:29.192070 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.192225 kubelet[2670]: W0912 17:11:29.192079 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.192225 kubelet[2670]: E0912 17:11:29.192087 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.192321 kubelet[2670]: E0912 17:11:29.192236 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.192321 kubelet[2670]: W0912 17:11:29.192244 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.192321 kubelet[2670]: E0912 17:11:29.192252 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.192469 kubelet[2670]: E0912 17:11:29.192386 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.192469 kubelet[2670]: W0912 17:11:29.192393 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.192469 kubelet[2670]: E0912 17:11:29.192400 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.192557 kubelet[2670]: E0912 17:11:29.192519 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.192557 kubelet[2670]: W0912 17:11:29.192526 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.192557 kubelet[2670]: E0912 17:11:29.192533 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.192713 kubelet[2670]: E0912 17:11:29.192653 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.192713 kubelet[2670]: W0912 17:11:29.192660 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.192713 kubelet[2670]: E0912 17:11:29.192666 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.192815 kubelet[2670]: E0912 17:11:29.192800 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.192815 kubelet[2670]: W0912 17:11:29.192811 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.192963 kubelet[2670]: E0912 17:11:29.192819 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.192963 kubelet[2670]: E0912 17:11:29.192939 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.192963 kubelet[2670]: W0912 17:11:29.192947 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.192963 kubelet[2670]: E0912 17:11:29.192955 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.193165 kubelet[2670]: E0912 17:11:29.193074 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.193165 kubelet[2670]: W0912 17:11:29.193081 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.193165 kubelet[2670]: E0912 17:11:29.193088 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.193331 kubelet[2670]: E0912 17:11:29.193249 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.193331 kubelet[2670]: W0912 17:11:29.193256 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.193331 kubelet[2670]: E0912 17:11:29.193263 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.193414 kubelet[2670]: E0912 17:11:29.193408 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.193434 kubelet[2670]: W0912 17:11:29.193416 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.193434 kubelet[2670]: E0912 17:11:29.193423 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.193584 kubelet[2670]: E0912 17:11:29.193558 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.193584 kubelet[2670]: W0912 17:11:29.193565 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.193584 kubelet[2670]: E0912 17:11:29.193573 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.193786 kubelet[2670]: E0912 17:11:29.193739 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.193786 kubelet[2670]: W0912 17:11:29.193753 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.193786 kubelet[2670]: E0912 17:11:29.193762 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.193966 kubelet[2670]: E0912 17:11:29.193943 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.193966 kubelet[2670]: W0912 17:11:29.193956 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.193966 kubelet[2670]: E0912 17:11:29.193964 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.198705 containerd[1529]: time="2025-09-12T17:11:29.198670422Z" level=info msg="connecting to shim 6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b" address="unix:///run/containerd/s/a483080ab196761ee2640addf93710438cd8d487d8f2ac3d2af9a2c86552d057" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:29.203473 kubelet[2670]: E0912 17:11:29.203443 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.203473 kubelet[2670]: W0912 17:11:29.203465 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.203565 kubelet[2670]: E0912 17:11:29.203482 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.203565 kubelet[2670]: I0912 17:11:29.203517 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11-registration-dir\") pod \"csi-node-driver-nkw85\" (UID: \"8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11\") " pod="calico-system/csi-node-driver-nkw85" Sep 12 17:11:29.203708 kubelet[2670]: E0912 17:11:29.203682 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.203708 kubelet[2670]: W0912 17:11:29.203694 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.203708 kubelet[2670]: E0912 17:11:29.203707 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.203779 kubelet[2670]: I0912 17:11:29.203721 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11-varrun\") pod \"csi-node-driver-nkw85\" (UID: \"8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11\") " pod="calico-system/csi-node-driver-nkw85" Sep 12 17:11:29.203901 kubelet[2670]: E0912 17:11:29.203881 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.203901 kubelet[2670]: W0912 17:11:29.203893 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.203957 kubelet[2670]: E0912 17:11:29.203910 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.203957 kubelet[2670]: I0912 17:11:29.203925 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km7jm\" (UniqueName: \"kubernetes.io/projected/8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11-kube-api-access-km7jm\") pod \"csi-node-driver-nkw85\" (UID: \"8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11\") " pod="calico-system/csi-node-driver-nkw85" Sep 12 17:11:29.204097 kubelet[2670]: E0912 17:11:29.204081 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.204097 kubelet[2670]: W0912 17:11:29.204093 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.204097 kubelet[2670]: E0912 17:11:29.204110 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.204097 kubelet[2670]: I0912 17:11:29.204137 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11-kubelet-dir\") pod \"csi-node-driver-nkw85\" (UID: \"8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11\") " pod="calico-system/csi-node-driver-nkw85" Sep 12 17:11:29.204395 kubelet[2670]: E0912 17:11:29.204281 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.204395 kubelet[2670]: W0912 17:11:29.204291 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.204395 kubelet[2670]: E0912 17:11:29.204308 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.204395 kubelet[2670]: I0912 17:11:29.204340 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11-socket-dir\") pod \"csi-node-driver-nkw85\" (UID: \"8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11\") " pod="calico-system/csi-node-driver-nkw85" Sep 12 17:11:29.204529 kubelet[2670]: E0912 17:11:29.204496 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.204529 kubelet[2670]: W0912 17:11:29.204508 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.204529 kubelet[2670]: E0912 17:11:29.204522 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.204671 kubelet[2670]: E0912 17:11:29.204649 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.204671 kubelet[2670]: W0912 17:11:29.204656 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.204671 kubelet[2670]: E0912 17:11:29.204670 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.204848 kubelet[2670]: E0912 17:11:29.204801 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.204848 kubelet[2670]: W0912 17:11:29.204808 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.204848 kubelet[2670]: E0912 17:11:29.204822 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.205001 kubelet[2670]: E0912 17:11:29.204928 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.205001 kubelet[2670]: W0912 17:11:29.204938 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.205001 kubelet[2670]: E0912 17:11:29.204969 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.205108 kubelet[2670]: E0912 17:11:29.205060 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.205108 kubelet[2670]: W0912 17:11:29.205076 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.205108 kubelet[2670]: E0912 17:11:29.205100 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.205226 kubelet[2670]: E0912 17:11:29.205215 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.205226 kubelet[2670]: W0912 17:11:29.205223 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.205373 kubelet[2670]: E0912 17:11:29.205247 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.205373 kubelet[2670]: E0912 17:11:29.205354 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.205373 kubelet[2670]: W0912 17:11:29.205361 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.205373 kubelet[2670]: E0912 17:11:29.205373 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.205506 kubelet[2670]: E0912 17:11:29.205483 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.205506 kubelet[2670]: W0912 17:11:29.205490 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.205506 kubelet[2670]: E0912 17:11:29.205496 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.205691 kubelet[2670]: E0912 17:11:29.205625 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.205691 kubelet[2670]: W0912 17:11:29.205633 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.205691 kubelet[2670]: E0912 17:11:29.205640 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.205788 kubelet[2670]: E0912 17:11:29.205774 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.205788 kubelet[2670]: W0912 17:11:29.205786 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.205928 kubelet[2670]: E0912 17:11:29.205794 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.234341 systemd[1]: Started cri-containerd-6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b.scope - libcontainer container 6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b. 
Sep 12 17:11:29.284510 containerd[1529]: time="2025-09-12T17:11:29.284455568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-96c9d,Uid:7c1b7bf7-3fd5-45a7-93a8-d1c0b7d9ba82,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b\"" Sep 12 17:11:29.305008 kubelet[2670]: E0912 17:11:29.304964 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.305008 kubelet[2670]: W0912 17:11:29.304991 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.305008 kubelet[2670]: E0912 17:11:29.305011 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.305230 kubelet[2670]: E0912 17:11:29.305203 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.305230 kubelet[2670]: W0912 17:11:29.305215 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.305230 kubelet[2670]: E0912 17:11:29.305227 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.305412 kubelet[2670]: E0912 17:11:29.305382 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.305412 kubelet[2670]: W0912 17:11:29.305395 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.305412 kubelet[2670]: E0912 17:11:29.305410 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.305591 kubelet[2670]: E0912 17:11:29.305555 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.305591 kubelet[2670]: W0912 17:11:29.305568 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.305591 kubelet[2670]: E0912 17:11:29.305583 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.305747 kubelet[2670]: E0912 17:11:29.305735 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.305747 kubelet[2670]: W0912 17:11:29.305746 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.305796 kubelet[2670]: E0912 17:11:29.305759 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.305946 kubelet[2670]: E0912 17:11:29.305912 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.305946 kubelet[2670]: W0912 17:11:29.305925 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.305946 kubelet[2670]: E0912 17:11:29.305938 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:29.309882 kubelet[2670]: E0912 17:11:29.309842 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.309882 kubelet[2670]: W0912 17:11:29.309865 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.309882 kubelet[2670]: E0912 17:11:29.309880 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:29.319730 kubelet[2670]: E0912 17:11:29.319690 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:29.319730 kubelet[2670]: W0912 17:11:29.319716 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:29.319839 kubelet[2670]: E0912 17:11:29.319742 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:30.397608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398811831.mount: Deactivated successfully. 
Sep 12 17:11:30.685214 kubelet[2670]: E0912 17:11:30.684930 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkw85" podUID="8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11" Sep 12 17:11:30.907152 containerd[1529]: time="2025-09-12T17:11:30.907031519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:30.907645 containerd[1529]: time="2025-09-12T17:11:30.907589082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 17:11:30.908385 containerd[1529]: time="2025-09-12T17:11:30.908356766Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:30.910233 containerd[1529]: time="2025-09-12T17:11:30.910198575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:30.910719 containerd[1529]: time="2025-09-12T17:11:30.910699178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.746765065s" Sep 12 17:11:30.910763 containerd[1529]: time="2025-09-12T17:11:30.910725498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 17:11:30.913785 containerd[1529]: time="2025-09-12T17:11:30.913741073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:11:30.964489 containerd[1529]: time="2025-09-12T17:11:30.964376291Z" level=info msg="CreateContainer within sandbox \"5f82387ff6e6e48b8a8883225bbe1933717877483b3f47b23455eb2a25552aec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:11:30.985267 containerd[1529]: time="2025-09-12T17:11:30.984677155Z" level=info msg="Container d889f79809146985b2c974324be7264c1b792babdbb4c06aec27afbf9dc7944c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:30.992996 containerd[1529]: time="2025-09-12T17:11:30.992941117Z" level=info msg="CreateContainer within sandbox \"5f82387ff6e6e48b8a8883225bbe1933717877483b3f47b23455eb2a25552aec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d889f79809146985b2c974324be7264c1b792babdbb4c06aec27afbf9dc7944c\"" Sep 12 17:11:30.993781 containerd[1529]: time="2025-09-12T17:11:30.993751641Z" level=info msg="StartContainer for \"d889f79809146985b2c974324be7264c1b792babdbb4c06aec27afbf9dc7944c\"" Sep 12 17:11:31.004899 containerd[1529]: time="2025-09-12T17:11:31.004840256Z" level=info msg="connecting to shim d889f79809146985b2c974324be7264c1b792babdbb4c06aec27afbf9dc7944c" address="unix:///run/containerd/s/99acb16a2f0988ac3cdc78c2b3cee4f8b74bdd517e584ef6d55e7e1d2b1877ea" protocol=ttrpc version=3 Sep 12 17:11:31.035381 systemd[1]: Started cri-containerd-d889f79809146985b2c974324be7264c1b792babdbb4c06aec27afbf9dc7944c.scope - libcontainer container d889f79809146985b2c974324be7264c1b792babdbb4c06aec27afbf9dc7944c. 
Sep 12 17:11:31.079022 containerd[1529]: time="2025-09-12T17:11:31.078969090Z" level=info msg="StartContainer for \"d889f79809146985b2c974324be7264c1b792babdbb4c06aec27afbf9dc7944c\" returns successfully" Sep 12 17:11:31.808043 kubelet[2670]: E0912 17:11:31.807961 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.808043 kubelet[2670]: W0912 17:11:31.807987 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.808043 kubelet[2670]: E0912 17:11:31.808009 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:31.809801 kubelet[2670]: E0912 17:11:31.808215 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.809801 kubelet[2670]: W0912 17:11:31.808224 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.809801 kubelet[2670]: E0912 17:11:31.808234 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:31.828961 kubelet[2670]: E0912 17:11:31.828932 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.829044 kubelet[2670]: W0912 17:11:31.829015 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.829205 kubelet[2670]: E0912 17:11:31.829188 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:31.829362 kubelet[2670]: E0912 17:11:31.829340 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.829362 kubelet[2670]: W0912 17:11:31.829354 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.829412 kubelet[2670]: E0912 17:11:31.829374 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:31.829525 kubelet[2670]: E0912 17:11:31.829508 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.829525 kubelet[2670]: W0912 17:11:31.829518 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.829586 kubelet[2670]: E0912 17:11:31.829531 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:31.829698 kubelet[2670]: E0912 17:11:31.829686 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.829698 kubelet[2670]: W0912 17:11:31.829697 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.829782 kubelet[2670]: E0912 17:11:31.829724 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:31.830335 kubelet[2670]: E0912 17:11:31.830104 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.830335 kubelet[2670]: W0912 17:11:31.830143 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.830335 kubelet[2670]: E0912 17:11:31.830172 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:31.830459 kubelet[2670]: E0912 17:11:31.830359 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.830459 kubelet[2670]: W0912 17:11:31.830370 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.830459 kubelet[2670]: E0912 17:11:31.830385 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:31.830551 kubelet[2670]: E0912 17:11:31.830523 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:31.830551 kubelet[2670]: W0912 17:11:31.830548 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:31.830601 kubelet[2670]: E0912 17:11:31.830556 2670 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:32.323425 containerd[1529]: time="2025-09-12T17:11:32.323291497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:32.324159 containerd[1529]: time="2025-09-12T17:11:32.324012420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 17:11:32.325144 containerd[1529]: time="2025-09-12T17:11:32.325057985Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:32.332166 containerd[1529]: time="2025-09-12T17:11:32.331569854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:32.333972 containerd[1529]: time="2025-09-12T17:11:32.333908185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.420108752s" Sep 12 17:11:32.333972 containerd[1529]: time="2025-09-12T17:11:32.333947385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 17:11:32.337724 containerd[1529]: time="2025-09-12T17:11:32.337679362Z" level=info msg="CreateContainer within sandbox \"6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:11:32.346307 containerd[1529]: time="2025-09-12T17:11:32.346255320Z" level=info msg="Container 2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:32.354990 containerd[1529]: time="2025-09-12T17:11:32.354928039Z" level=info msg="CreateContainer within sandbox \"6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b\"" Sep 12 17:11:32.355656 containerd[1529]: time="2025-09-12T17:11:32.355616442Z" level=info msg="StartContainer for \"2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b\"" Sep 12 17:11:32.357030 containerd[1529]: time="2025-09-12T17:11:32.357000328Z" level=info msg="connecting to shim 2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b" address="unix:///run/containerd/s/a483080ab196761ee2640addf93710438cd8d487d8f2ac3d2af9a2c86552d057" protocol=ttrpc version=3 Sep 12 17:11:32.381394 systemd[1]: Started cri-containerd-2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b.scope - libcontainer container 
2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b. Sep 12 17:11:32.424142 containerd[1529]: time="2025-09-12T17:11:32.423870948Z" level=info msg="StartContainer for \"2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b\" returns successfully" Sep 12 17:11:32.439667 systemd[1]: cri-containerd-2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b.scope: Deactivated successfully. Sep 12 17:11:32.442610 containerd[1529]: time="2025-09-12T17:11:32.442416151Z" level=info msg="received exit event container_id:\"2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b\" id:\"2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b\" pid:3414 exited_at:{seconds:1757697092 nanos:442062509}" Sep 12 17:11:32.443862 containerd[1529]: time="2025-09-12T17:11:32.442835592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b\" id:\"2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b\" pid:3414 exited_at:{seconds:1757697092 nanos:442062509}" Sep 12 17:11:32.463227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ebbed4b3a64f6afb510e935d449d62e49f1de3fde0b71219948e58cee4ffe1b-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:32.685234 kubelet[2670]: E0912 17:11:32.684926 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkw85" podUID="8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11" Sep 12 17:11:32.808473 kubelet[2670]: I0912 17:11:32.808416 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:32.810166 containerd[1529]: time="2025-09-12T17:11:32.809228593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:11:32.825757 kubelet[2670]: I0912 17:11:32.825461 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67dbc57c94-dld52" podStartSLOduration=3.075533585 podStartE2EDuration="4.825444186s" podCreationTimestamp="2025-09-12 17:11:28 +0000 UTC" firstStartedPulling="2025-09-12 17:11:29.16338627 +0000 UTC m=+21.582561280" lastFinishedPulling="2025-09-12 17:11:30.913296831 +0000 UTC m=+23.332471881" observedRunningTime="2025-09-12 17:11:31.838756919 +0000 UTC m=+24.257931929" watchObservedRunningTime="2025-09-12 17:11:32.825444186 +0000 UTC m=+25.244619196" Sep 12 17:11:34.684742 kubelet[2670]: E0912 17:11:34.684692 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkw85" podUID="8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11" Sep 12 17:11:36.303845 containerd[1529]: time="2025-09-12T17:11:36.303794922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:36.304867 containerd[1529]: time="2025-09-12T17:11:36.304664085Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 17:11:36.305565 containerd[1529]: time="2025-09-12T17:11:36.305524528Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:36.308143 containerd[1529]: time="2025-09-12T17:11:36.308091777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:36.308545 containerd[1529]: time="2025-09-12T17:11:36.308521458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.498316181s" Sep 12 17:11:36.308590 containerd[1529]: time="2025-09-12T17:11:36.308549099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 17:11:36.310632 containerd[1529]: time="2025-09-12T17:11:36.310599826Z" level=info msg="CreateContainer within sandbox \"6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:11:36.319162 containerd[1529]: time="2025-09-12T17:11:36.318622453Z" level=info msg="Container c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:36.327500 containerd[1529]: time="2025-09-12T17:11:36.327447684Z" level=info msg="CreateContainer within sandbox \"6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b\"" Sep 12 17:11:36.328088 containerd[1529]: time="2025-09-12T17:11:36.328014966Z" level=info msg="StartContainer for \"c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b\"" Sep 12 17:11:36.329762 containerd[1529]: time="2025-09-12T17:11:36.329718812Z" level=info msg="connecting to shim c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b" address="unix:///run/containerd/s/a483080ab196761ee2640addf93710438cd8d487d8f2ac3d2af9a2c86552d057" protocol=ttrpc version=3 Sep 12 17:11:36.348358 systemd[1]: Started cri-containerd-c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b.scope - libcontainer container c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b. Sep 12 17:11:36.389638 containerd[1529]: time="2025-09-12T17:11:36.389573579Z" level=info msg="StartContainer for \"c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b\" returns successfully" Sep 12 17:11:36.685014 kubelet[2670]: E0912 17:11:36.684492 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkw85" podUID="8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11" Sep 12 17:11:37.031440 systemd[1]: cri-containerd-c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b.scope: Deactivated successfully. Sep 12 17:11:37.031714 systemd[1]: cri-containerd-c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b.scope: Consumed 470ms CPU time, 174.7M memory peak, 1M read from disk, 165.8M written to disk. 
Sep 12 17:11:37.035112 containerd[1529]: time="2025-09-12T17:11:37.034926284Z" level=info msg="received exit event container_id:\"c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b\" id:\"c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b\" pid:3475 exited_at:{seconds:1757697097 nanos:34705723}" Sep 12 17:11:37.035112 containerd[1529]: time="2025-09-12T17:11:37.035027684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b\" id:\"c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b\" pid:3475 exited_at:{seconds:1757697097 nanos:34705723}" Sep 12 17:11:37.050946 kubelet[2670]: I0912 17:11:37.050895 2670 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:11:37.063172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3d9733e54e6c8b751d077522fe440194c9ad6e76033357371ee9348aa04d83b-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:37.163550 kubelet[2670]: I0912 17:11:37.163356 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a929edb-eab6-4add-8f46-412e4de3f50e-whisker-ca-bundle\") pod \"whisker-76c5d89797-pbtvp\" (UID: \"8a929edb-eab6-4add-8f46-412e4de3f50e\") " pod="calico-system/whisker-76c5d89797-pbtvp" Sep 12 17:11:37.163550 kubelet[2670]: I0912 17:11:37.163413 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/506ce59b-3fed-457d-9fb4-90edc9572ad2-config-volume\") pod \"coredns-7c65d6cfc9-mh8hq\" (UID: \"506ce59b-3fed-457d-9fb4-90edc9572ad2\") " pod="kube-system/coredns-7c65d6cfc9-mh8hq" Sep 12 17:11:37.163550 kubelet[2670]: I0912 17:11:37.163438 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzpn\" (UniqueName: \"kubernetes.io/projected/f76edb24-7d91-4109-a1e0-6afc117a0f16-kube-api-access-vdzpn\") pod \"coredns-7c65d6cfc9-d9fnv\" (UID: \"f76edb24-7d91-4109-a1e0-6afc117a0f16\") " pod="kube-system/coredns-7c65d6cfc9-d9fnv" Sep 12 17:11:37.163550 kubelet[2670]: I0912 17:11:37.163463 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9569d1b-2f07-443a-86a0-aedb1c7a6901-tigera-ca-bundle\") pod \"calico-kube-controllers-5bd58f486c-dc94g\" (UID: \"b9569d1b-2f07-443a-86a0-aedb1c7a6901\") " pod="calico-system/calico-kube-controllers-5bd58f486c-dc94g" Sep 12 17:11:37.163550 kubelet[2670]: I0912 17:11:37.163484 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d92kc\" (UniqueName: \"kubernetes.io/projected/b9569d1b-2f07-443a-86a0-aedb1c7a6901-kube-api-access-d92kc\") pod \"calico-kube-controllers-5bd58f486c-dc94g\" (UID: 
\"b9569d1b-2f07-443a-86a0-aedb1c7a6901\") " pod="calico-system/calico-kube-controllers-5bd58f486c-dc94g" Sep 12 17:11:37.165467 kubelet[2670]: I0912 17:11:37.163510 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a929edb-eab6-4add-8f46-412e4de3f50e-whisker-backend-key-pair\") pod \"whisker-76c5d89797-pbtvp\" (UID: \"8a929edb-eab6-4add-8f46-412e4de3f50e\") " pod="calico-system/whisker-76c5d89797-pbtvp" Sep 12 17:11:37.165467 kubelet[2670]: I0912 17:11:37.163538 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4gt\" (UniqueName: \"kubernetes.io/projected/3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8-kube-api-access-4j4gt\") pod \"goldmane-7988f88666-jfdwz\" (UID: \"3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8\") " pod="calico-system/goldmane-7988f88666-jfdwz" Sep 12 17:11:37.165467 kubelet[2670]: I0912 17:11:37.163565 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16e247a5-69ba-416d-911c-7b93c8ce7cdc-calico-apiserver-certs\") pod \"calico-apiserver-57884765c9-vlpk6\" (UID: \"16e247a5-69ba-416d-911c-7b93c8ce7cdc\") " pod="calico-apiserver/calico-apiserver-57884765c9-vlpk6" Sep 12 17:11:37.165467 kubelet[2670]: I0912 17:11:37.163586 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8-config\") pod \"goldmane-7988f88666-jfdwz\" (UID: \"3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8\") " pod="calico-system/goldmane-7988f88666-jfdwz" Sep 12 17:11:37.165467 kubelet[2670]: I0912 17:11:37.163606 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f76edb24-7d91-4109-a1e0-6afc117a0f16-config-volume\") pod \"coredns-7c65d6cfc9-d9fnv\" (UID: \"f76edb24-7d91-4109-a1e0-6afc117a0f16\") " pod="kube-system/coredns-7c65d6cfc9-d9fnv" Sep 12 17:11:37.165688 kubelet[2670]: I0912 17:11:37.163627 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwlh\" (UniqueName: \"kubernetes.io/projected/8a929edb-eab6-4add-8f46-412e4de3f50e-kube-api-access-gxwlh\") pod \"whisker-76c5d89797-pbtvp\" (UID: \"8a929edb-eab6-4add-8f46-412e4de3f50e\") " pod="calico-system/whisker-76c5d89797-pbtvp" Sep 12 17:11:37.165688 kubelet[2670]: I0912 17:11:37.163651 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8-goldmane-ca-bundle\") pod \"goldmane-7988f88666-jfdwz\" (UID: \"3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8\") " pod="calico-system/goldmane-7988f88666-jfdwz" Sep 12 17:11:37.165688 kubelet[2670]: I0912 17:11:37.163673 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcm94\" (UniqueName: \"kubernetes.io/projected/16e247a5-69ba-416d-911c-7b93c8ce7cdc-kube-api-access-hcm94\") pod \"calico-apiserver-57884765c9-vlpk6\" (UID: \"16e247a5-69ba-416d-911c-7b93c8ce7cdc\") " pod="calico-apiserver/calico-apiserver-57884765c9-vlpk6" Sep 12 17:11:37.165688 kubelet[2670]: I0912 17:11:37.163698 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/75c7d3f4-2baa-4dc5-9d2b-3e31c0555321-calico-apiserver-certs\") pod \"calico-apiserver-57884765c9-86qks\" (UID: \"75c7d3f4-2baa-4dc5-9d2b-3e31c0555321\") " pod="calico-apiserver/calico-apiserver-57884765c9-86qks" Sep 12 17:11:37.165688 kubelet[2670]: I0912 17:11:37.163727 2670 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctznp\" (UniqueName: \"kubernetes.io/projected/75c7d3f4-2baa-4dc5-9d2b-3e31c0555321-kube-api-access-ctznp\") pod \"calico-apiserver-57884765c9-86qks\" (UID: \"75c7d3f4-2baa-4dc5-9d2b-3e31c0555321\") " pod="calico-apiserver/calico-apiserver-57884765c9-86qks" Sep 12 17:11:37.166180 kubelet[2670]: I0912 17:11:37.163748 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2cg\" (UniqueName: \"kubernetes.io/projected/506ce59b-3fed-457d-9fb4-90edc9572ad2-kube-api-access-qb2cg\") pod \"coredns-7c65d6cfc9-mh8hq\" (UID: \"506ce59b-3fed-457d-9fb4-90edc9572ad2\") " pod="kube-system/coredns-7c65d6cfc9-mh8hq" Sep 12 17:11:37.166180 kubelet[2670]: I0912 17:11:37.163768 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8-goldmane-key-pair\") pod \"goldmane-7988f88666-jfdwz\" (UID: \"3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8\") " pod="calico-system/goldmane-7988f88666-jfdwz" Sep 12 17:11:37.170222 systemd[1]: Created slice kubepods-besteffort-pod8a929edb_eab6_4add_8f46_412e4de3f50e.slice - libcontainer container kubepods-besteffort-pod8a929edb_eab6_4add_8f46_412e4de3f50e.slice. Sep 12 17:11:37.177496 systemd[1]: Created slice kubepods-burstable-podf76edb24_7d91_4109_a1e0_6afc117a0f16.slice - libcontainer container kubepods-burstable-podf76edb24_7d91_4109_a1e0_6afc117a0f16.slice. Sep 12 17:11:37.188037 systemd[1]: Created slice kubepods-besteffort-pod3b0f4287_82b3_4e4b_bfe7_2225c26f9fd8.slice - libcontainer container kubepods-besteffort-pod3b0f4287_82b3_4e4b_bfe7_2225c26f9fd8.slice. Sep 12 17:11:37.197774 systemd[1]: Created slice kubepods-besteffort-pod75c7d3f4_2baa_4dc5_9d2b_3e31c0555321.slice - libcontainer container kubepods-besteffort-pod75c7d3f4_2baa_4dc5_9d2b_3e31c0555321.slice. 
Sep 12 17:11:37.206278 systemd[1]: Created slice kubepods-burstable-pod506ce59b_3fed_457d_9fb4_90edc9572ad2.slice - libcontainer container kubepods-burstable-pod506ce59b_3fed_457d_9fb4_90edc9572ad2.slice. Sep 12 17:11:37.215549 systemd[1]: Created slice kubepods-besteffort-pod16e247a5_69ba_416d_911c_7b93c8ce7cdc.slice - libcontainer container kubepods-besteffort-pod16e247a5_69ba_416d_911c_7b93c8ce7cdc.slice. Sep 12 17:11:37.219719 systemd[1]: Created slice kubepods-besteffort-podb9569d1b_2f07_443a_86a0_aedb1c7a6901.slice - libcontainer container kubepods-besteffort-podb9569d1b_2f07_443a_86a0_aedb1c7a6901.slice. Sep 12 17:11:37.475155 containerd[1529]: time="2025-09-12T17:11:37.475022871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c5d89797-pbtvp,Uid:8a929edb-eab6-4add-8f46-412e4de3f50e,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:37.485985 containerd[1529]: time="2025-09-12T17:11:37.485704146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d9fnv,Uid:f76edb24-7d91-4109-a1e0-6afc117a0f16,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:37.504430 containerd[1529]: time="2025-09-12T17:11:37.504382966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57884765c9-86qks,Uid:75c7d3f4-2baa-4dc5-9d2b-3e31c0555321,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:37.504817 containerd[1529]: time="2025-09-12T17:11:37.504726487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-jfdwz,Uid:3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:37.510666 containerd[1529]: time="2025-09-12T17:11:37.510621986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mh8hq,Uid:506ce59b-3fed-457d-9fb4-90edc9572ad2,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:37.520082 containerd[1529]: time="2025-09-12T17:11:37.519531655Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57884765c9-vlpk6,Uid:16e247a5-69ba-416d-911c-7b93c8ce7cdc,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:37.523036 containerd[1529]: time="2025-09-12T17:11:37.522887186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd58f486c-dc94g,Uid:b9569d1b-2f07-443a-86a0-aedb1c7a6901,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:37.644930 containerd[1529]: time="2025-09-12T17:11:37.644861182Z" level=error msg="Failed to destroy network for sandbox \"ba9614f01f66525fa29cbc674d51e311326c48d4a18aa521d9e5c4353d348472\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.651164 containerd[1529]: time="2025-09-12T17:11:37.651072122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57884765c9-86qks,Uid:75c7d3f4-2baa-4dc5-9d2b-3e31c0555321,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba9614f01f66525fa29cbc674d51e311326c48d4a18aa521d9e5c4353d348472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.652904 kubelet[2670]: E0912 17:11:37.652730 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba9614f01f66525fa29cbc674d51e311326c48d4a18aa521d9e5c4353d348472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.652904 kubelet[2670]: E0912 17:11:37.652830 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"ba9614f01f66525fa29cbc674d51e311326c48d4a18aa521d9e5c4353d348472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57884765c9-86qks" Sep 12 17:11:37.652904 kubelet[2670]: E0912 17:11:37.652855 2670 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba9614f01f66525fa29cbc674d51e311326c48d4a18aa521d9e5c4353d348472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57884765c9-86qks" Sep 12 17:11:37.653146 kubelet[2670]: E0912 17:11:37.652905 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57884765c9-86qks_calico-apiserver(75c7d3f4-2baa-4dc5-9d2b-3e31c0555321)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57884765c9-86qks_calico-apiserver(75c7d3f4-2baa-4dc5-9d2b-3e31c0555321)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba9614f01f66525fa29cbc674d51e311326c48d4a18aa521d9e5c4353d348472\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57884765c9-86qks" podUID="75c7d3f4-2baa-4dc5-9d2b-3e31c0555321" Sep 12 17:11:37.667875 containerd[1529]: time="2025-09-12T17:11:37.667809936Z" level=error msg="Failed to destroy network for sandbox \"469867d35c85a42ac77270f5bcb25ad1a97d24b46129f9f3fccbc2f8aa0d7efc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.668291 containerd[1529]: time="2025-09-12T17:11:37.668045817Z" level=error msg="Failed to destroy network for sandbox \"d6544ee2640b4e873a4eca252885e15874f801469f257d3d6670df35a046bbb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.670752 containerd[1529]: time="2025-09-12T17:11:37.670692865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mh8hq,Uid:506ce59b-3fed-457d-9fb4-90edc9572ad2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"469867d35c85a42ac77270f5bcb25ad1a97d24b46129f9f3fccbc2f8aa0d7efc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.671747 kubelet[2670]: E0912 17:11:37.671698 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469867d35c85a42ac77270f5bcb25ad1a97d24b46129f9f3fccbc2f8aa0d7efc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.671846 kubelet[2670]: E0912 17:11:37.671761 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469867d35c85a42ac77270f5bcb25ad1a97d24b46129f9f3fccbc2f8aa0d7efc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mh8hq" Sep 12 17:11:37.671846 
kubelet[2670]: E0912 17:11:37.671782 2670 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469867d35c85a42ac77270f5bcb25ad1a97d24b46129f9f3fccbc2f8aa0d7efc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mh8hq" Sep 12 17:11:37.671906 kubelet[2670]: E0912 17:11:37.671823 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mh8hq_kube-system(506ce59b-3fed-457d-9fb4-90edc9572ad2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mh8hq_kube-system(506ce59b-3fed-457d-9fb4-90edc9572ad2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"469867d35c85a42ac77270f5bcb25ad1a97d24b46129f9f3fccbc2f8aa0d7efc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mh8hq" podUID="506ce59b-3fed-457d-9fb4-90edc9572ad2" Sep 12 17:11:37.672075 containerd[1529]: time="2025-09-12T17:11:37.672031630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd58f486c-dc94g,Uid:b9569d1b-2f07-443a-86a0-aedb1c7a6901,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6544ee2640b4e873a4eca252885e15874f801469f257d3d6670df35a046bbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.672392 kubelet[2670]: E0912 17:11:37.672328 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"d6544ee2640b4e873a4eca252885e15874f801469f257d3d6670df35a046bbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.672468 kubelet[2670]: E0912 17:11:37.672414 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6544ee2640b4e873a4eca252885e15874f801469f257d3d6670df35a046bbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd58f486c-dc94g" Sep 12 17:11:37.672468 kubelet[2670]: E0912 17:11:37.672432 2670 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6544ee2640b4e873a4eca252885e15874f801469f257d3d6670df35a046bbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bd58f486c-dc94g" Sep 12 17:11:37.672526 kubelet[2670]: E0912 17:11:37.672474 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bd58f486c-dc94g_calico-system(b9569d1b-2f07-443a-86a0-aedb1c7a6901)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bd58f486c-dc94g_calico-system(b9569d1b-2f07-443a-86a0-aedb1c7a6901)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6544ee2640b4e873a4eca252885e15874f801469f257d3d6670df35a046bbb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bd58f486c-dc94g" podUID="b9569d1b-2f07-443a-86a0-aedb1c7a6901" Sep 12 17:11:37.684403 containerd[1529]: time="2025-09-12T17:11:37.684352030Z" level=error msg="Failed to destroy network for sandbox \"77bb60de2c63e27cb236cf11c071a6c1d4f2ebbb63fb3fbc1d8498d5bb50276e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.687445 containerd[1529]: time="2025-09-12T17:11:37.687314119Z" level=error msg="Failed to destroy network for sandbox \"9ba0eefc0fba02a0deec2c456f9fef571d886d2e1956801210402a1719b4081d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.689166 containerd[1529]: time="2025-09-12T17:11:37.689064285Z" level=error msg="Failed to destroy network for sandbox \"9f5d8da6628b62c6fbe7f1476fba30d2a7bcb3fbbe94a7990ba1224b8166b5e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.691640 containerd[1529]: time="2025-09-12T17:11:37.691528173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d9fnv,Uid:f76edb24-7d91-4109-a1e0-6afc117a0f16,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77bb60de2c63e27cb236cf11c071a6c1d4f2ebbb63fb3fbc1d8498d5bb50276e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.691824 kubelet[2670]: E0912 17:11:37.691782 2670 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77bb60de2c63e27cb236cf11c071a6c1d4f2ebbb63fb3fbc1d8498d5bb50276e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.692093 kubelet[2670]: E0912 17:11:37.691843 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77bb60de2c63e27cb236cf11c071a6c1d4f2ebbb63fb3fbc1d8498d5bb50276e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d9fnv" Sep 12 17:11:37.692093 kubelet[2670]: E0912 17:11:37.691866 2670 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77bb60de2c63e27cb236cf11c071a6c1d4f2ebbb63fb3fbc1d8498d5bb50276e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d9fnv" Sep 12 17:11:37.692093 kubelet[2670]: E0912 17:11:37.691913 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-d9fnv_kube-system(f76edb24-7d91-4109-a1e0-6afc117a0f16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-d9fnv_kube-system(f76edb24-7d91-4109-a1e0-6afc117a0f16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77bb60de2c63e27cb236cf11c071a6c1d4f2ebbb63fb3fbc1d8498d5bb50276e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-d9fnv" podUID="f76edb24-7d91-4109-a1e0-6afc117a0f16" Sep 12 17:11:37.693720 containerd[1529]: time="2025-09-12T17:11:37.692634137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57884765c9-vlpk6,Uid:16e247a5-69ba-416d-911c-7b93c8ce7cdc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba0eefc0fba02a0deec2c456f9fef571d886d2e1956801210402a1719b4081d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.693720 containerd[1529]: time="2025-09-12T17:11:37.693642140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c5d89797-pbtvp,Uid:8a929edb-eab6-4add-8f46-412e4de3f50e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f5d8da6628b62c6fbe7f1476fba30d2a7bcb3fbbe94a7990ba1224b8166b5e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.693883 kubelet[2670]: E0912 17:11:37.692858 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba0eefc0fba02a0deec2c456f9fef571d886d2e1956801210402a1719b4081d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.693883 kubelet[2670]: E0912 17:11:37.692904 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ba0eefc0fba02a0deec2c456f9fef571d886d2e1956801210402a1719b4081d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57884765c9-vlpk6" Sep 12 17:11:37.693883 kubelet[2670]: E0912 17:11:37.692926 2670 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba0eefc0fba02a0deec2c456f9fef571d886d2e1956801210402a1719b4081d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57884765c9-vlpk6" Sep 12 17:11:37.693965 kubelet[2670]: E0912 17:11:37.692960 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57884765c9-vlpk6_calico-apiserver(16e247a5-69ba-416d-911c-7b93c8ce7cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57884765c9-vlpk6_calico-apiserver(16e247a5-69ba-416d-911c-7b93c8ce7cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ba0eefc0fba02a0deec2c456f9fef571d886d2e1956801210402a1719b4081d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57884765c9-vlpk6" podUID="16e247a5-69ba-416d-911c-7b93c8ce7cdc" Sep 12 17:11:37.693965 kubelet[2670]: E0912 17:11:37.693814 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f5d8da6628b62c6fbe7f1476fba30d2a7bcb3fbbe94a7990ba1224b8166b5e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.693965 kubelet[2670]: E0912 17:11:37.693840 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f5d8da6628b62c6fbe7f1476fba30d2a7bcb3fbbe94a7990ba1224b8166b5e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76c5d89797-pbtvp" Sep 12 17:11:37.694063 kubelet[2670]: E0912 17:11:37.693855 2670 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f5d8da6628b62c6fbe7f1476fba30d2a7bcb3fbbe94a7990ba1224b8166b5e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76c5d89797-pbtvp" Sep 12 17:11:37.694063 kubelet[2670]: E0912 17:11:37.693885 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76c5d89797-pbtvp_calico-system(8a929edb-eab6-4add-8f46-412e4de3f50e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76c5d89797-pbtvp_calico-system(8a929edb-eab6-4add-8f46-412e4de3f50e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f5d8da6628b62c6fbe7f1476fba30d2a7bcb3fbbe94a7990ba1224b8166b5e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76c5d89797-pbtvp" podUID="8a929edb-eab6-4add-8f46-412e4de3f50e" Sep 12 17:11:37.696903 containerd[1529]: time="2025-09-12T17:11:37.696852710Z" level=error msg="Failed to destroy 
network for sandbox \"95c3b071eabb9215ed262a6eea434e25529820644b4c36bc5525f74acbf72b07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.699174 containerd[1529]: time="2025-09-12T17:11:37.698736676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-jfdwz,Uid:3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95c3b071eabb9215ed262a6eea434e25529820644b4c36bc5525f74acbf72b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.699289 kubelet[2670]: E0912 17:11:37.698948 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95c3b071eabb9215ed262a6eea434e25529820644b4c36bc5525f74acbf72b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:37.699289 kubelet[2670]: E0912 17:11:37.699034 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95c3b071eabb9215ed262a6eea434e25529820644b4c36bc5525f74acbf72b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-jfdwz" Sep 12 17:11:37.699289 kubelet[2670]: E0912 17:11:37.699056 2670 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"95c3b071eabb9215ed262a6eea434e25529820644b4c36bc5525f74acbf72b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-jfdwz" Sep 12 17:11:37.699428 kubelet[2670]: E0912 17:11:37.699139 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-jfdwz_calico-system(3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-jfdwz_calico-system(3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95c3b071eabb9215ed262a6eea434e25529820644b4c36bc5525f74acbf72b07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-jfdwz" podUID="3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8" Sep 12 17:11:37.830840 containerd[1529]: time="2025-09-12T17:11:37.830800145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:11:38.320273 systemd[1]: run-netns-cni\x2d773b1c4a\x2d7e72\x2dc7e5\x2d7d13\x2dad2a87308b0c.mount: Deactivated successfully. Sep 12 17:11:38.320359 systemd[1]: run-netns-cni\x2d59992ec5\x2d9b51\x2de045\x2dd895\x2d7abc01346d07.mount: Deactivated successfully. Sep 12 17:11:38.320407 systemd[1]: run-netns-cni\x2dcb02c3a7\x2de3a1\x2dace4\x2d3f9b\x2ddfbb4c6e1c0a.mount: Deactivated successfully. Sep 12 17:11:38.320454 systemd[1]: run-netns-cni\x2d632d35e5\x2d3fae\x2d4a85\x2d376b\x2db8318f5a6dff.mount: Deactivated successfully. Sep 12 17:11:38.700454 systemd[1]: Created slice kubepods-besteffort-pod8d5924f1_1a9c_4d4b_97cc_dc02a03a8b11.slice - libcontainer container kubepods-besteffort-pod8d5924f1_1a9c_4d4b_97cc_dc02a03a8b11.slice. 
Sep 12 17:11:38.708341 containerd[1529]: time="2025-09-12T17:11:38.706013080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkw85,Uid:8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:38.776509 containerd[1529]: time="2025-09-12T17:11:38.776371094Z" level=error msg="Failed to destroy network for sandbox \"ebd4e8b3f595c9e0cfc4a4525423a820e0c96ef211cdf7e70c53954b16850fd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:38.778100 containerd[1529]: time="2025-09-12T17:11:38.777975739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkw85,Uid:8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd4e8b3f595c9e0cfc4a4525423a820e0c96ef211cdf7e70c53954b16850fd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:38.778496 systemd[1]: run-netns-cni\x2d482cd86d\x2df08d\x2d19f3\x2d68ef\x2d511350a1dbbb.mount: Deactivated successfully. 
Sep 12 17:11:38.779003 kubelet[2670]: E0912 17:11:38.778509 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd4e8b3f595c9e0cfc4a4525423a820e0c96ef211cdf7e70c53954b16850fd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:38.779003 kubelet[2670]: E0912 17:11:38.778566 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd4e8b3f595c9e0cfc4a4525423a820e0c96ef211cdf7e70c53954b16850fd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nkw85" Sep 12 17:11:38.779003 kubelet[2670]: E0912 17:11:38.778587 2670 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd4e8b3f595c9e0cfc4a4525423a820e0c96ef211cdf7e70c53954b16850fd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nkw85" Sep 12 17:11:38.779477 kubelet[2670]: E0912 17:11:38.778625 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nkw85_calico-system(8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nkw85_calico-system(8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebd4e8b3f595c9e0cfc4a4525423a820e0c96ef211cdf7e70c53954b16850fd5\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nkw85" podUID="8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11" Sep 12 17:11:41.329881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953571998.mount: Deactivated successfully. Sep 12 17:11:41.442783 containerd[1529]: time="2025-09-12T17:11:41.442211685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:41.444358 containerd[1529]: time="2025-09-12T17:11:41.444287810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:11:41.445458 containerd[1529]: time="2025-09-12T17:11:41.445432013Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:41.448901 containerd[1529]: time="2025-09-12T17:11:41.448841461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:41.449591 containerd[1529]: time="2025-09-12T17:11:41.449382223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.618542238s" Sep 12 17:11:41.449591 containerd[1529]: time="2025-09-12T17:11:41.449416823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 
17:11:41.473254 containerd[1529]: time="2025-09-12T17:11:41.473204842Z" level=info msg="CreateContainer within sandbox \"6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:11:41.583587 containerd[1529]: time="2025-09-12T17:11:41.583470759Z" level=info msg="Container 2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:41.589758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146674682.mount: Deactivated successfully. Sep 12 17:11:41.641921 containerd[1529]: time="2025-09-12T17:11:41.641857945Z" level=info msg="CreateContainer within sandbox \"6f74f1bf5079964b456cffbe6c6b615c03961df189c0002cc63d683cb7e81d7b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd\"" Sep 12 17:11:41.643836 containerd[1529]: time="2025-09-12T17:11:41.643804110Z" level=info msg="StartContainer for \"2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd\"" Sep 12 17:11:41.645573 containerd[1529]: time="2025-09-12T17:11:41.645543314Z" level=info msg="connecting to shim 2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd" address="unix:///run/containerd/s/a483080ab196761ee2640addf93710438cd8d487d8f2ac3d2af9a2c86552d057" protocol=ttrpc version=3 Sep 12 17:11:41.686361 systemd[1]: Started cri-containerd-2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd.scope - libcontainer container 2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd. Sep 12 17:11:41.732971 containerd[1529]: time="2025-09-12T17:11:41.732858413Z" level=info msg="StartContainer for \"2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd\" returns successfully" Sep 12 17:11:41.871046 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Sep 12 17:11:41.871172 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:11:41.881418 kubelet[2670]: I0912 17:11:41.881329 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-96c9d" podStartSLOduration=1.716691135 podStartE2EDuration="13.881312145s" podCreationTimestamp="2025-09-12 17:11:28 +0000 UTC" firstStartedPulling="2025-09-12 17:11:29.285988936 +0000 UTC m=+21.705163906" lastFinishedPulling="2025-09-12 17:11:41.450609906 +0000 UTC m=+33.869784916" observedRunningTime="2025-09-12 17:11:41.878321537 +0000 UTC m=+34.297496547" watchObservedRunningTime="2025-09-12 17:11:41.881312145 +0000 UTC m=+34.300487155" Sep 12 17:11:42.098670 kubelet[2670]: I0912 17:11:42.098305 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a929edb-eab6-4add-8f46-412e4de3f50e-whisker-backend-key-pair\") pod \"8a929edb-eab6-4add-8f46-412e4de3f50e\" (UID: \"8a929edb-eab6-4add-8f46-412e4de3f50e\") " Sep 12 17:11:42.098670 kubelet[2670]: I0912 17:11:42.098370 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxwlh\" (UniqueName: \"kubernetes.io/projected/8a929edb-eab6-4add-8f46-412e4de3f50e-kube-api-access-gxwlh\") pod \"8a929edb-eab6-4add-8f46-412e4de3f50e\" (UID: \"8a929edb-eab6-4add-8f46-412e4de3f50e\") " Sep 12 17:11:42.098670 kubelet[2670]: I0912 17:11:42.098397 2670 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a929edb-eab6-4add-8f46-412e4de3f50e-whisker-ca-bundle\") pod \"8a929edb-eab6-4add-8f46-412e4de3f50e\" (UID: \"8a929edb-eab6-4add-8f46-412e4de3f50e\") " Sep 12 17:11:42.115915 kubelet[2670]: I0912 17:11:42.115865 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8a929edb-eab6-4add-8f46-412e4de3f50e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8a929edb-eab6-4add-8f46-412e4de3f50e" (UID: "8a929edb-eab6-4add-8f46-412e4de3f50e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:11:42.116481 kubelet[2670]: I0912 17:11:42.116439 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a929edb-eab6-4add-8f46-412e4de3f50e-kube-api-access-gxwlh" (OuterVolumeSpecName: "kube-api-access-gxwlh") pod "8a929edb-eab6-4add-8f46-412e4de3f50e" (UID: "8a929edb-eab6-4add-8f46-412e4de3f50e"). InnerVolumeSpecName "kube-api-access-gxwlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:11:42.123954 kubelet[2670]: I0912 17:11:42.123895 2670 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a929edb-eab6-4add-8f46-412e4de3f50e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8a929edb-eab6-4add-8f46-412e4de3f50e" (UID: "8a929edb-eab6-4add-8f46-412e4de3f50e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:11:42.199043 kubelet[2670]: I0912 17:11:42.198584 2670 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a929edb-eab6-4add-8f46-412e4de3f50e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:42.199043 kubelet[2670]: I0912 17:11:42.198626 2670 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a929edb-eab6-4add-8f46-412e4de3f50e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:42.199043 kubelet[2670]: I0912 17:11:42.198636 2670 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxwlh\" (UniqueName: \"kubernetes.io/projected/8a929edb-eab6-4add-8f46-412e4de3f50e-kube-api-access-gxwlh\") on node \"localhost\" DevicePath \"\"" Sep 12 17:11:42.328869 systemd[1]: var-lib-kubelet-pods-8a929edb\x2deab6\x2d4add\x2d8f46\x2d412e4de3f50e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxwlh.mount: Deactivated successfully. Sep 12 17:11:42.328962 systemd[1]: var-lib-kubelet-pods-8a929edb\x2deab6\x2d4add\x2d8f46\x2d412e4de3f50e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:11:42.852962 kubelet[2670]: I0912 17:11:42.852930 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:42.857452 systemd[1]: Removed slice kubepods-besteffort-pod8a929edb_eab6_4add_8f46_412e4de3f50e.slice - libcontainer container kubepods-besteffort-pod8a929edb_eab6_4add_8f46_412e4de3f50e.slice. Sep 12 17:11:42.916796 systemd[1]: Created slice kubepods-besteffort-pod5e30fe07_d9ef_42fc_8826_2e7455aa0902.slice - libcontainer container kubepods-besteffort-pod5e30fe07_d9ef_42fc_8826_2e7455aa0902.slice. 
Sep 12 17:11:43.003486 kubelet[2670]: I0912 17:11:43.003424 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e30fe07-d9ef-42fc-8826-2e7455aa0902-whisker-backend-key-pair\") pod \"whisker-77c958dd56-z2xt6\" (UID: \"5e30fe07-d9ef-42fc-8826-2e7455aa0902\") " pod="calico-system/whisker-77c958dd56-z2xt6"
Sep 12 17:11:43.003486 kubelet[2670]: I0912 17:11:43.003482 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7nl\" (UniqueName: \"kubernetes.io/projected/5e30fe07-d9ef-42fc-8826-2e7455aa0902-kube-api-access-jc7nl\") pod \"whisker-77c958dd56-z2xt6\" (UID: \"5e30fe07-d9ef-42fc-8826-2e7455aa0902\") " pod="calico-system/whisker-77c958dd56-z2xt6"
Sep 12 17:11:43.003880 kubelet[2670]: I0912 17:11:43.003507 2670 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e30fe07-d9ef-42fc-8826-2e7455aa0902-whisker-ca-bundle\") pod \"whisker-77c958dd56-z2xt6\" (UID: \"5e30fe07-d9ef-42fc-8826-2e7455aa0902\") " pod="calico-system/whisker-77c958dd56-z2xt6"
Sep 12 17:11:43.222320 containerd[1529]: time="2025-09-12T17:11:43.221866519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c958dd56-z2xt6,Uid:5e30fe07-d9ef-42fc-8826-2e7455aa0902,Namespace:calico-system,Attempt:0,}"
Sep 12 17:11:43.485958 systemd-networkd[1449]: calif5ddfd9a457: Link UP
Sep 12 17:11:43.486348 systemd-networkd[1449]: calif5ddfd9a457: Gained carrier
Sep 12 17:11:43.500941 containerd[1529]: 2025-09-12 17:11:43.288 [INFO][3930] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:11:43.500941 containerd[1529]: 2025-09-12 17:11:43.347 [INFO][3930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77c958dd56--z2xt6-eth0 whisker-77c958dd56- calico-system 5e30fe07-d9ef-42fc-8826-2e7455aa0902 898 0 2025-09-12 17:11:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77c958dd56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77c958dd56-z2xt6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif5ddfd9a457 [] [] }} ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Namespace="calico-system" Pod="whisker-77c958dd56-z2xt6" WorkloadEndpoint="localhost-k8s-whisker--77c958dd56--z2xt6-"
Sep 12 17:11:43.500941 containerd[1529]: 2025-09-12 17:11:43.348 [INFO][3930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Namespace="calico-system" Pod="whisker-77c958dd56-z2xt6" WorkloadEndpoint="localhost-k8s-whisker--77c958dd56--z2xt6-eth0"
Sep 12 17:11:43.500941 containerd[1529]: 2025-09-12 17:11:43.437 [INFO][3965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" HandleID="k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Workload="localhost-k8s-whisker--77c958dd56--z2xt6-eth0"
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.437 [INFO][3965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" HandleID="k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Workload="localhost-k8s-whisker--77c958dd56--z2xt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000139a90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77c958dd56-z2xt6", "timestamp":"2025-09-12 17:11:43.437504594 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.437 [INFO][3965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.437 [INFO][3965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.438 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.449 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" host="localhost"
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.455 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.460 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.462 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.465 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 17:11:43.501273 containerd[1529]: 2025-09-12 17:11:43.465 [INFO][3965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" host="localhost"
Sep 12 17:11:43.501474 containerd[1529]: 2025-09-12 17:11:43.466 [INFO][3965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e
Sep 12 17:11:43.501474 containerd[1529]: 2025-09-12 17:11:43.471 [INFO][3965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" host="localhost"
Sep 12 17:11:43.501474 containerd[1529]: 2025-09-12 17:11:43.476 [INFO][3965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" host="localhost"
Sep 12 17:11:43.501474 containerd[1529]: 2025-09-12 17:11:43.476 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" host="localhost"
Sep 12 17:11:43.501474 containerd[1529]: 2025-09-12 17:11:43.476 [INFO][3965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:43.501474 containerd[1529]: 2025-09-12 17:11:43.476 [INFO][3965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" HandleID="k8s-pod-network.706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Workload="localhost-k8s-whisker--77c958dd56--z2xt6-eth0"
Sep 12 17:11:43.501586 containerd[1529]: 2025-09-12 17:11:43.479 [INFO][3930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Namespace="calico-system" Pod="whisker-77c958dd56-z2xt6" WorkloadEndpoint="localhost-k8s-whisker--77c958dd56--z2xt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c958dd56--z2xt6-eth0", GenerateName:"whisker-77c958dd56-", Namespace:"calico-system", SelfLink:"", UID:"5e30fe07-d9ef-42fc-8826-2e7455aa0902", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c958dd56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77c958dd56-z2xt6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif5ddfd9a457", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:43.501586 containerd[1529]: 2025-09-12 17:11:43.479 [INFO][3930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Namespace="calico-system" Pod="whisker-77c958dd56-z2xt6" WorkloadEndpoint="localhost-k8s-whisker--77c958dd56--z2xt6-eth0"
Sep 12 17:11:43.501755 containerd[1529]: 2025-09-12 17:11:43.479 [INFO][3930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5ddfd9a457 ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Namespace="calico-system" Pod="whisker-77c958dd56-z2xt6" WorkloadEndpoint="localhost-k8s-whisker--77c958dd56--z2xt6-eth0"
Sep 12 17:11:43.501755 containerd[1529]: 2025-09-12 17:11:43.486 [INFO][3930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Namespace="calico-system" Pod="whisker-77c958dd56-z2xt6" WorkloadEndpoint="localhost-k8s-whisker--77c958dd56--z2xt6-eth0"
Sep 12 17:11:43.501803 containerd[1529]: 2025-09-12 17:11:43.486 [INFO][3930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Namespace="calico-system" Pod="whisker-77c958dd56-z2xt6" WorkloadEndpoint="localhost-k8s-whisker--77c958dd56--z2xt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c958dd56--z2xt6-eth0", GenerateName:"whisker-77c958dd56-", Namespace:"calico-system", SelfLink:"", UID:"5e30fe07-d9ef-42fc-8826-2e7455aa0902", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c958dd56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e", Pod:"whisker-77c958dd56-z2xt6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif5ddfd9a457", MAC:"06:11:3d:74:0d:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:43.501853 containerd[1529]: 2025-09-12 17:11:43.497 [INFO][3930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" Namespace="calico-system" Pod="whisker-77c958dd56-z2xt6" WorkloadEndpoint="localhost-k8s-whisker--77c958dd56--z2xt6-eth0"
Sep 12 17:11:43.527156 containerd[1529]: time="2025-09-12T17:11:43.526631030Z" level=info msg="connecting to shim 706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e" address="unix:///run/containerd/s/2ab700fa26c92893e8cd59c3628d799754bbd56048e0dd9fad037aa5704ba4ff" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:11:43.559289 systemd[1]: Started cri-containerd-706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e.scope - libcontainer container 706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e.
Sep 12 17:11:43.593318 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:11:43.639883 containerd[1529]: time="2025-09-12T17:11:43.639828520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c958dd56-z2xt6,Uid:5e30fe07-d9ef-42fc-8826-2e7455aa0902,Namespace:calico-system,Attempt:0,} returns sandbox id \"706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e\""
Sep 12 17:11:43.641553 containerd[1529]: time="2025-09-12T17:11:43.641504563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 17:11:43.686988 kubelet[2670]: I0912 17:11:43.686939 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a929edb-eab6-4add-8f46-412e4de3f50e" path="/var/lib/kubelet/pods/8a929edb-eab6-4add-8f46-412e4de3f50e/volumes"
Sep 12 17:11:44.564919 containerd[1529]: time="2025-09-12T17:11:44.564861638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606"
Sep 12 17:11:44.568891 containerd[1529]: time="2025-09-12T17:11:44.568839927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 927.297004ms"
Sep 12 17:11:44.568891 containerd[1529]: time="2025-09-12T17:11:44.568889287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\""
Sep 12 17:11:44.572827 containerd[1529]: time="2025-09-12T17:11:44.572788015Z" level=info msg="CreateContainer within sandbox \"706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 12 17:11:44.577746 containerd[1529]: time="2025-09-12T17:11:44.577693665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:44.578438 containerd[1529]: time="2025-09-12T17:11:44.578413026Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:44.578990 containerd[1529]: time="2025-09-12T17:11:44.578950148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:44.580148 containerd[1529]: time="2025-09-12T17:11:44.579545389Z" level=info msg="Container d56031e0d153fba1a657dda1f0d44ae45f1a03d7030ddf99588e2de8a9490245: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:11:44.587042 containerd[1529]: time="2025-09-12T17:11:44.586981844Z" level=info msg="CreateContainer within sandbox \"706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d56031e0d153fba1a657dda1f0d44ae45f1a03d7030ddf99588e2de8a9490245\""
Sep 12 17:11:44.587513 containerd[1529]: time="2025-09-12T17:11:44.587475205Z" level=info msg="StartContainer for \"d56031e0d153fba1a657dda1f0d44ae45f1a03d7030ddf99588e2de8a9490245\""
Sep 12 17:11:44.589013 containerd[1529]: time="2025-09-12T17:11:44.588980728Z" level=info msg="connecting to shim d56031e0d153fba1a657dda1f0d44ae45f1a03d7030ddf99588e2de8a9490245" address="unix:///run/containerd/s/2ab700fa26c92893e8cd59c3628d799754bbd56048e0dd9fad037aa5704ba4ff" protocol=ttrpc version=3
Sep 12 17:11:44.612390 systemd[1]: Started cri-containerd-d56031e0d153fba1a657dda1f0d44ae45f1a03d7030ddf99588e2de8a9490245.scope - libcontainer container d56031e0d153fba1a657dda1f0d44ae45f1a03d7030ddf99588e2de8a9490245.
Sep 12 17:11:44.649427 containerd[1529]: time="2025-09-12T17:11:44.649389493Z" level=info msg="StartContainer for \"d56031e0d153fba1a657dda1f0d44ae45f1a03d7030ddf99588e2de8a9490245\" returns successfully"
Sep 12 17:11:44.651711 containerd[1529]: time="2025-09-12T17:11:44.651679058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 17:11:45.371325 systemd-networkd[1449]: calif5ddfd9a457: Gained IPv6LL
Sep 12 17:11:45.973514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346776282.mount: Deactivated successfully.
Sep 12 17:11:45.991721 containerd[1529]: time="2025-09-12T17:11:45.991675816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:45.992754 containerd[1529]: time="2025-09-12T17:11:45.992568057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700"
Sep 12 17:11:45.994097 containerd[1529]: time="2025-09-12T17:11:45.994051500Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:45.996407 containerd[1529]: time="2025-09-12T17:11:45.996363065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:45.997418 containerd[1529]: time="2025-09-12T17:11:45.997387227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.345670129s"
Sep 12 17:11:45.997466 containerd[1529]: time="2025-09-12T17:11:45.997422467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\""
Sep 12 17:11:46.000716 containerd[1529]: time="2025-09-12T17:11:46.000682513Z" level=info msg="CreateContainer within sandbox \"706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 17:11:46.010417 containerd[1529]: time="2025-09-12T17:11:46.010271051Z" level=info msg="Container f303023eb010f7ae636dcbf0ad4c90b2d8f16c80feb4b4eaaa9805430d889bcd: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:11:46.022748 containerd[1529]: time="2025-09-12T17:11:46.022695753Z" level=info msg="CreateContainer within sandbox \"706efbb961418611e29c23eb86f6940d7c02a2e4cac51322e584ae4a3fc1fb8e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f303023eb010f7ae636dcbf0ad4c90b2d8f16c80feb4b4eaaa9805430d889bcd\""
Sep 12 17:11:46.024150 containerd[1529]: time="2025-09-12T17:11:46.023304274Z" level=info msg="StartContainer for \"f303023eb010f7ae636dcbf0ad4c90b2d8f16c80feb4b4eaaa9805430d889bcd\""
Sep 12 17:11:46.024757 containerd[1529]: time="2025-09-12T17:11:46.024730237Z" level=info msg="connecting to shim f303023eb010f7ae636dcbf0ad4c90b2d8f16c80feb4b4eaaa9805430d889bcd" address="unix:///run/containerd/s/2ab700fa26c92893e8cd59c3628d799754bbd56048e0dd9fad037aa5704ba4ff" protocol=ttrpc version=3
Sep 12 17:11:46.047324 systemd[1]: Started cri-containerd-f303023eb010f7ae636dcbf0ad4c90b2d8f16c80feb4b4eaaa9805430d889bcd.scope - libcontainer container f303023eb010f7ae636dcbf0ad4c90b2d8f16c80feb4b4eaaa9805430d889bcd.
Sep 12 17:11:46.084170 containerd[1529]: time="2025-09-12T17:11:46.084110304Z" level=info msg="StartContainer for \"f303023eb010f7ae636dcbf0ad4c90b2d8f16c80feb4b4eaaa9805430d889bcd\" returns successfully"
Sep 12 17:11:46.884389 kubelet[2670]: I0912 17:11:46.884306 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77c958dd56-z2xt6" podStartSLOduration=2.527259371 podStartE2EDuration="4.884282836s" podCreationTimestamp="2025-09-12 17:11:42 +0000 UTC" firstStartedPulling="2025-09-12 17:11:43.641263243 +0000 UTC m=+36.060438253" lastFinishedPulling="2025-09-12 17:11:45.998286708 +0000 UTC m=+38.417461718" observedRunningTime="2025-09-12 17:11:46.882877114 +0000 UTC m=+39.302052204" watchObservedRunningTime="2025-09-12 17:11:46.884282836 +0000 UTC m=+39.303457846"
Sep 12 17:11:48.110916 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:56298.service - OpenSSH per-connection server daemon (10.0.0.1:56298).
Sep 12 17:11:48.165512 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 56298 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:11:48.167350 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:48.172081 systemd-logind[1509]: New session 8 of user core.
Sep 12 17:11:48.183386 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:11:48.349980 sshd[4219]: Connection closed by 10.0.0.1 port 56298
Sep 12 17:11:48.350362 sshd-session[4216]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:48.354012 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:56298.service: Deactivated successfully.
Sep 12 17:11:48.355827 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:11:48.356612 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:11:48.357783 systemd-logind[1509]: Removed session 8.
Sep 12 17:11:48.685276 containerd[1529]: time="2025-09-12T17:11:48.685219041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d9fnv,Uid:f76edb24-7d91-4109-a1e0-6afc117a0f16,Namespace:kube-system,Attempt:0,}"
Sep 12 17:11:48.815389 systemd-networkd[1449]: cali19deeb34af9: Link UP
Sep 12 17:11:48.816540 systemd-networkd[1449]: cali19deeb34af9: Gained carrier
Sep 12 17:11:48.835876 containerd[1529]: 2025-09-12 17:11:48.713 [INFO][4243] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:11:48.835876 containerd[1529]: 2025-09-12 17:11:48.729 [INFO][4243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0 coredns-7c65d6cfc9- kube-system f76edb24-7d91-4109-a1e0-6afc117a0f16 836 0 2025-09-12 17:11:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-d9fnv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali19deeb34af9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d9fnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d9fnv-"
Sep 12 17:11:48.835876 containerd[1529]: 2025-09-12 17:11:48.729 [INFO][4243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d9fnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0"
Sep 12 17:11:48.835876 containerd[1529]: 2025-09-12 17:11:48.756 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" HandleID="k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Workload="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0"
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.756 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" HandleID="k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Workload="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c39e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-d9fnv", "timestamp":"2025-09-12 17:11:48.756436954 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.756 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.756 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.756 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.768 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" host="localhost"
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.778 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.783 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.792 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.794 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 17:11:48.836148 containerd[1529]: 2025-09-12 17:11:48.794 [INFO][4263] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" host="localhost"
Sep 12 17:11:48.836363 containerd[1529]: 2025-09-12 17:11:48.797 [INFO][4263] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c
Sep 12 17:11:48.836363 containerd[1529]: 2025-09-12 17:11:48.802 [INFO][4263] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" host="localhost"
Sep 12 17:11:48.836363 containerd[1529]: 2025-09-12 17:11:48.807 [INFO][4263] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" host="localhost"
Sep 12 17:11:48.836363 containerd[1529]: 2025-09-12 17:11:48.807 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" host="localhost"
Sep 12 17:11:48.836363 containerd[1529]: 2025-09-12 17:11:48.807 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:11:48.836363 containerd[1529]: 2025-09-12 17:11:48.807 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" HandleID="k8s-pod-network.9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Workload="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0"
Sep 12 17:11:48.836482 containerd[1529]: 2025-09-12 17:11:48.811 [INFO][4243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d9fnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f76edb24-7d91-4109-a1e0-6afc117a0f16", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-d9fnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19deeb34af9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:48.836545 containerd[1529]: 2025-09-12 17:11:48.811 [INFO][4243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d9fnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0"
Sep 12 17:11:48.836545 containerd[1529]: 2025-09-12 17:11:48.811 [INFO][4243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19deeb34af9 ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d9fnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0"
Sep 12 17:11:48.836545 containerd[1529]: 2025-09-12 17:11:48.814 [INFO][4243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d9fnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0"
Sep 12 17:11:48.836603 containerd[1529]: 2025-09-12 17:11:48.815 [INFO][4243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d9fnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f76edb24-7d91-4109-a1e0-6afc117a0f16", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c", Pod:"coredns-7c65d6cfc9-d9fnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19deeb34af9", MAC:"32:b6:e6:9b:a0:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:11:48.836603 containerd[1529]: 2025-09-12 17:11:48.831 [INFO][4243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d9fnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d9fnv-eth0"
Sep 12 17:11:48.871110 containerd[1529]: time="2025-09-12T17:11:48.871070857Z" level=info msg="connecting to shim 9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c" address="unix:///run/containerd/s/f88d51552957ab566da269e465267878b8a65133d736185f95cc233c3925ba31" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:11:48.906347 systemd[1]: Started cri-containerd-9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c.scope - libcontainer container 9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c.
Sep 12 17:11:48.918833 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:11:48.941162 containerd[1529]: time="2025-09-12T17:11:48.940258767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d9fnv,Uid:f76edb24-7d91-4109-a1e0-6afc117a0f16,Namespace:kube-system,Attempt:0,} returns sandbox id \"9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c\"" Sep 12 17:11:48.943783 containerd[1529]: time="2025-09-12T17:11:48.943751413Z" level=info msg="CreateContainer within sandbox \"9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:11:48.963010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532618589.mount: Deactivated successfully. Sep 12 17:11:48.964205 containerd[1529]: time="2025-09-12T17:11:48.959801918Z" level=info msg="Container 00d6193dc43ae6b4b2d9b101480b01e653ebe0191422d7017e047f24d09618c8: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:48.969796 containerd[1529]: time="2025-09-12T17:11:48.969756534Z" level=info msg="CreateContainer within sandbox \"9efe6473f3225fc7b91b3bb1436b4a3b65ece41af6ff53e93bd07deff034078c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00d6193dc43ae6b4b2d9b101480b01e653ebe0191422d7017e047f24d09618c8\"" Sep 12 17:11:48.970796 containerd[1529]: time="2025-09-12T17:11:48.970582856Z" level=info msg="StartContainer for \"00d6193dc43ae6b4b2d9b101480b01e653ebe0191422d7017e047f24d09618c8\"" Sep 12 17:11:48.972220 containerd[1529]: time="2025-09-12T17:11:48.972186538Z" level=info msg="connecting to shim 00d6193dc43ae6b4b2d9b101480b01e653ebe0191422d7017e047f24d09618c8" address="unix:///run/containerd/s/f88d51552957ab566da269e465267878b8a65133d736185f95cc233c3925ba31" protocol=ttrpc version=3 Sep 12 17:11:48.997331 systemd[1]: Started 
cri-containerd-00d6193dc43ae6b4b2d9b101480b01e653ebe0191422d7017e047f24d09618c8.scope - libcontainer container 00d6193dc43ae6b4b2d9b101480b01e653ebe0191422d7017e047f24d09618c8. Sep 12 17:11:49.024361 containerd[1529]: time="2025-09-12T17:11:49.024319579Z" level=info msg="StartContainer for \"00d6193dc43ae6b4b2d9b101480b01e653ebe0191422d7017e047f24d09618c8\" returns successfully" Sep 12 17:11:49.907632 kubelet[2670]: I0912 17:11:49.907548 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d9fnv" podStartSLOduration=36.907529379 podStartE2EDuration="36.907529379s" podCreationTimestamp="2025-09-12 17:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:49.907312099 +0000 UTC m=+42.326487109" watchObservedRunningTime="2025-09-12 17:11:49.907529379 +0000 UTC m=+42.326704389" Sep 12 17:11:50.685227 containerd[1529]: time="2025-09-12T17:11:50.685177198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57884765c9-vlpk6,Uid:16e247a5-69ba-416d-911c-7b93c8ce7cdc,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:50.685583 containerd[1529]: time="2025-09-12T17:11:50.685181998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd58f486c-dc94g,Uid:b9569d1b-2f07-443a-86a0-aedb1c7a6901,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:50.747309 systemd-networkd[1449]: cali19deeb34af9: Gained IPv6LL Sep 12 17:11:50.873487 systemd-networkd[1449]: calif8acee5e83c: Link UP Sep 12 17:11:50.876335 systemd-networkd[1449]: calif8acee5e83c: Gained carrier Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.761 [INFO][4394] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.775 [INFO][4394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0 calico-apiserver-57884765c9- calico-apiserver 16e247a5-69ba-416d-911c-7b93c8ce7cdc 840 0 2025-09-12 17:11:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57884765c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57884765c9-vlpk6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8acee5e83c [] [] }} ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-vlpk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--vlpk6-" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.776 [INFO][4394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-vlpk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.808 [INFO][4423] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" HandleID="k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Workload="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.808 [INFO][4423] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" HandleID="k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Workload="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57884765c9-vlpk6", "timestamp":"2025-09-12 17:11:50.808515651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.808 [INFO][4423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.808 [INFO][4423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.808 [INFO][4423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.821 [INFO][4423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.826 [INFO][4423] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.834 [INFO][4423] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.836 [INFO][4423] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.839 [INFO][4423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.839 [INFO][4423] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.840 [INFO][4423] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.851 [INFO][4423] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.866 [INFO][4423] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.866 [INFO][4423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" host="localhost" Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.867 [INFO][4423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:50.893331 containerd[1529]: 2025-09-12 17:11:50.867 [INFO][4423] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" HandleID="k8s-pod-network.85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Workload="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" Sep 12 17:11:50.894032 containerd[1529]: 2025-09-12 17:11:50.871 [INFO][4394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-vlpk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0", GenerateName:"calico-apiserver-57884765c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"16e247a5-69ba-416d-911c-7b93c8ce7cdc", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57884765c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57884765c9-vlpk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8acee5e83c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:50.894032 containerd[1529]: 2025-09-12 17:11:50.871 [INFO][4394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-vlpk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" Sep 12 17:11:50.894032 containerd[1529]: 2025-09-12 17:11:50.871 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8acee5e83c ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-vlpk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" Sep 12 17:11:50.894032 containerd[1529]: 2025-09-12 17:11:50.876 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-vlpk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" Sep 12 17:11:50.894032 containerd[1529]: 2025-09-12 17:11:50.877 [INFO][4394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-vlpk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0", 
GenerateName:"calico-apiserver-57884765c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"16e247a5-69ba-416d-911c-7b93c8ce7cdc", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57884765c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e", Pod:"calico-apiserver-57884765c9-vlpk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8acee5e83c", MAC:"1e:a5:44:07:a2:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:50.894032 containerd[1529]: 2025-09-12 17:11:50.889 [INFO][4394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-vlpk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--vlpk6-eth0" Sep 12 17:11:50.940094 containerd[1529]: time="2025-09-12T17:11:50.939644635Z" level=info msg="connecting to shim 85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e" 
address="unix:///run/containerd/s/f7e1403664ca23073e45c1f78af2e3aae21970e0ce67e66c6cb5443e68fabad9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:50.978465 systemd-networkd[1449]: calia3cb83a363e: Link UP Sep 12 17:11:50.979384 systemd-networkd[1449]: calia3cb83a363e: Gained carrier Sep 12 17:11:50.981273 systemd[1]: Started cri-containerd-85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e.scope - libcontainer container 85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e. Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.768 [INFO][4406] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.785 [INFO][4406] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0 calico-kube-controllers-5bd58f486c- calico-system b9569d1b-2f07-443a-86a0-aedb1c7a6901 831 0 2025-09-12 17:11:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bd58f486c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5bd58f486c-dc94g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia3cb83a363e [] [] }} ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Namespace="calico-system" Pod="calico-kube-controllers-5bd58f486c-dc94g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.785 [INFO][4406] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Namespace="calico-system" Pod="calico-kube-controllers-5bd58f486c-dc94g" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.821 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" HandleID="k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Workload="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.821 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" HandleID="k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Workload="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c230), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5bd58f486c-dc94g", "timestamp":"2025-09-12 17:11:50.821166149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.821 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.867 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.867 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.921 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.929 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.939 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.943 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.955 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.956 [INFO][4434] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.958 [INFO][4434] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130 Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.963 [INFO][4434] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.969 [INFO][4434] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.969 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" host="localhost" Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.969 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:51.006573 containerd[1529]: 2025-09-12 17:11:50.969 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" HandleID="k8s-pod-network.ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Workload="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" Sep 12 17:11:51.007098 containerd[1529]: 2025-09-12 17:11:50.972 [INFO][4406] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Namespace="calico-system" Pod="calico-kube-controllers-5bd58f486c-dc94g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0", GenerateName:"calico-kube-controllers-5bd58f486c-", Namespace:"calico-system", SelfLink:"", UID:"b9569d1b-2f07-443a-86a0-aedb1c7a6901", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd58f486c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5bd58f486c-dc94g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3cb83a363e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:51.007098 containerd[1529]: 2025-09-12 17:11:50.972 [INFO][4406] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Namespace="calico-system" Pod="calico-kube-controllers-5bd58f486c-dc94g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" Sep 12 17:11:51.007098 containerd[1529]: 2025-09-12 17:11:50.972 [INFO][4406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3cb83a363e ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Namespace="calico-system" Pod="calico-kube-controllers-5bd58f486c-dc94g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" Sep 12 17:11:51.007098 containerd[1529]: 2025-09-12 17:11:50.980 [INFO][4406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Namespace="calico-system" Pod="calico-kube-controllers-5bd58f486c-dc94g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" Sep 12 17:11:51.007098 containerd[1529]: 
2025-09-12 17:11:50.980 [INFO][4406] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Namespace="calico-system" Pod="calico-kube-controllers-5bd58f486c-dc94g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0", GenerateName:"calico-kube-controllers-5bd58f486c-", Namespace:"calico-system", SelfLink:"", UID:"b9569d1b-2f07-443a-86a0-aedb1c7a6901", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bd58f486c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130", Pod:"calico-kube-controllers-5bd58f486c-dc94g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3cb83a363e", MAC:"fa:66:27:9d:08:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:51.007098 
containerd[1529]: 2025-09-12 17:11:50.995 [INFO][4406] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" Namespace="calico-system" Pod="calico-kube-controllers-5bd58f486c-dc94g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bd58f486c--dc94g-eth0" Sep 12 17:11:51.011090 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:11:51.037951 containerd[1529]: time="2025-09-12T17:11:51.037858089Z" level=info msg="connecting to shim ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130" address="unix:///run/containerd/s/ade64b9707850f9b6dfdf8af540ca5c2665f07cbadf9ab26c9c696df6f0e7e1c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:51.040181 containerd[1529]: time="2025-09-12T17:11:51.039958812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57884765c9-vlpk6,Uid:16e247a5-69ba-416d-911c-7b93c8ce7cdc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e\"" Sep 12 17:11:51.041946 containerd[1529]: time="2025-09-12T17:11:51.041901775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:11:51.064308 systemd[1]: Started cri-containerd-ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130.scope - libcontainer container ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130. 
Sep 12 17:11:51.075498 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:11:51.097412 containerd[1529]: time="2025-09-12T17:11:51.097364607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bd58f486c-dc94g,Uid:b9569d1b-2f07-443a-86a0-aedb1c7a6901,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130\"" Sep 12 17:11:51.686423 containerd[1529]: time="2025-09-12T17:11:51.686193901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57884765c9-86qks,Uid:75c7d3f4-2baa-4dc5-9d2b-3e31c0555321,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:51.686423 containerd[1529]: time="2025-09-12T17:11:51.686415941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkw85,Uid:8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:51.686870 containerd[1529]: time="2025-09-12T17:11:51.686314821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mh8hq,Uid:506ce59b-3fed-457d-9fb4-90edc9572ad2,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:51.835371 systemd-networkd[1449]: cali814f52e3ffc: Link UP Sep 12 17:11:51.835527 systemd-networkd[1449]: cali814f52e3ffc: Gained carrier Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.722 [INFO][4577] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.744 [INFO][4577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nkw85-eth0 csi-node-driver- calico-system 8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11 709 0 2025-09-12 17:11:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nkw85 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali814f52e3ffc [] [] }} ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Namespace="calico-system" Pod="csi-node-driver-nkw85" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkw85-" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.744 [INFO][4577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Namespace="calico-system" Pod="csi-node-driver-nkw85" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkw85-eth0" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.783 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" HandleID="k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Workload="localhost-k8s-csi--node--driver--nkw85-eth0" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.784 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" HandleID="k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Workload="localhost-k8s-csi--node--driver--nkw85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nkw85", "timestamp":"2025-09-12 17:11:51.783844069 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 
17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.784 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.784 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.784 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.797 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.803 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.810 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.812 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.815 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.815 [INFO][4622] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.817 [INFO][4622] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4 Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.821 [INFO][4622] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.830 [INFO][4622] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.830 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" host="localhost" Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.830 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:51.851311 containerd[1529]: 2025-09-12 17:11:51.830 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" HandleID="k8s-pod-network.7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Workload="localhost-k8s-csi--node--driver--nkw85-eth0" Sep 12 17:11:51.851854 containerd[1529]: 2025-09-12 17:11:51.832 [INFO][4577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Namespace="calico-system" Pod="csi-node-driver-nkw85" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkw85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nkw85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nkw85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali814f52e3ffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:51.851854 containerd[1529]: 2025-09-12 17:11:51.833 [INFO][4577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Namespace="calico-system" Pod="csi-node-driver-nkw85" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkw85-eth0" Sep 12 17:11:51.851854 containerd[1529]: 2025-09-12 17:11:51.833 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali814f52e3ffc ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Namespace="calico-system" Pod="csi-node-driver-nkw85" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkw85-eth0" Sep 12 17:11:51.851854 containerd[1529]: 2025-09-12 17:11:51.834 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Namespace="calico-system" Pod="csi-node-driver-nkw85" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nkw85-eth0" Sep 12 17:11:51.851854 containerd[1529]: 2025-09-12 17:11:51.835 [INFO][4577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Namespace="calico-system" Pod="csi-node-driver-nkw85" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkw85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nkw85-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4", Pod:"csi-node-driver-nkw85", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali814f52e3ffc", MAC:"5e:22:f0:b2:f7:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 
17:11:51.851854 containerd[1529]: 2025-09-12 17:11:51.847 [INFO][4577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" Namespace="calico-system" Pod="csi-node-driver-nkw85" WorkloadEndpoint="localhost-k8s-csi--node--driver--nkw85-eth0" Sep 12 17:11:51.869659 containerd[1529]: time="2025-09-12T17:11:51.869606262Z" level=info msg="connecting to shim 7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4" address="unix:///run/containerd/s/1cd8cee0b4b0603963e0e399dc5b816aa13b7567ca575e2c2e17322fc84896d2" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:51.902335 systemd[1]: Started cri-containerd-7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4.scope - libcontainer container 7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4. Sep 12 17:11:51.924871 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:11:51.957678 containerd[1529]: time="2025-09-12T17:11:51.956745216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkw85,Uid:8d5924f1-1a9c-4d4b-97cc-dc02a03a8b11,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4\"" Sep 12 17:11:51.976947 systemd-networkd[1449]: calib6b1f3784a3: Link UP Sep 12 17:11:51.978488 systemd-networkd[1449]: calib6b1f3784a3: Gained carrier Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.715 [INFO][4578] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.744 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57884765c9--86qks-eth0 calico-apiserver-57884765c9- calico-apiserver 75c7d3f4-2baa-4dc5-9d2b-3e31c0555321 838 0 2025-09-12 17:11:24 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57884765c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57884765c9-86qks eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6b1f3784a3 [] [] }} ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-86qks" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--86qks-" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.744 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-86qks" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.801 [INFO][4628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" HandleID="k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Workload="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.801 [INFO][4628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" HandleID="k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Workload="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57884765c9-86qks", "timestamp":"2025-09-12 17:11:51.801635453 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.801 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.830 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.830 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.905 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.919 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.926 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.930 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.942 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.942 [INFO][4628] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.950 [INFO][4628] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07 Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.958 [INFO][4628] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.967 [INFO][4628] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.968 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" host="localhost" Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.968 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:52.030141 containerd[1529]: 2025-09-12 17:11:51.968 [INFO][4628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" HandleID="k8s-pod-network.0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Workload="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" Sep 12 17:11:52.030707 containerd[1529]: 2025-09-12 17:11:51.972 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-86qks" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57884765c9--86qks-eth0", GenerateName:"calico-apiserver-57884765c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"75c7d3f4-2baa-4dc5-9d2b-3e31c0555321", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57884765c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57884765c9-86qks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6b1f3784a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.030707 containerd[1529]: 2025-09-12 17:11:51.972 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-86qks" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" Sep 12 17:11:52.030707 containerd[1529]: 2025-09-12 17:11:51.972 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6b1f3784a3 ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-86qks" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" Sep 12 17:11:52.030707 containerd[1529]: 2025-09-12 17:11:51.980 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-86qks" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" Sep 12 17:11:52.030707 containerd[1529]: 2025-09-12 17:11:51.981 [INFO][4578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-86qks" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57884765c9--86qks-eth0", 
GenerateName:"calico-apiserver-57884765c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"75c7d3f4-2baa-4dc5-9d2b-3e31c0555321", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57884765c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07", Pod:"calico-apiserver-57884765c9-86qks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6b1f3784a3", MAC:"d2:a1:46:8b:55:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.030707 containerd[1529]: 2025-09-12 17:11:52.017 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" Namespace="calico-apiserver" Pod="calico-apiserver-57884765c9-86qks" WorkloadEndpoint="localhost-k8s-calico--apiserver--57884765c9--86qks-eth0" Sep 12 17:11:52.078535 containerd[1529]: time="2025-09-12T17:11:52.078313210Z" level=info msg="connecting to shim 0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07" 
address="unix:///run/containerd/s/24e03dcfcac8cd361d22a66c09b00ea58636c5811b3df1e786131f2f31d67304" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:52.092509 systemd-networkd[1449]: cali3ef6b0f4e7a: Link UP Sep 12 17:11:52.093443 systemd-networkd[1449]: cali3ef6b0f4e7a: Gained carrier Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:51.745 [INFO][4600] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:51.766 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0 coredns-7c65d6cfc9- kube-system 506ce59b-3fed-457d-9fb4-90edc9572ad2 839 0 2025-09-12 17:11:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mh8hq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3ef6b0f4e7a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mh8hq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mh8hq-" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:51.766 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mh8hq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:51.809 [INFO][4636] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" HandleID="k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" 
Workload="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:51.809 [INFO][4636] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" HandleID="k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Workload="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042c1b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mh8hq", "timestamp":"2025-09-12 17:11:51.809277543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:51.809 [INFO][4636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:51.968 [INFO][4636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:51.968 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.001 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.022 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.032 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.037 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.041 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.041 [INFO][4636] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.048 [INFO][4636] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276 Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.056 [INFO][4636] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.076 [INFO][4636] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.076 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" host="localhost" Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.076 [INFO][4636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:52.122193 containerd[1529]: 2025-09-12 17:11:52.076 [INFO][4636] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" HandleID="k8s-pod-network.3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Workload="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" Sep 12 17:11:52.123538 containerd[1529]: 2025-09-12 17:11:52.087 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mh8hq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"506ce59b-3fed-457d-9fb4-90edc9572ad2", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mh8hq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3ef6b0f4e7a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.123538 containerd[1529]: 2025-09-12 17:11:52.087 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mh8hq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" Sep 12 17:11:52.123538 containerd[1529]: 2025-09-12 17:11:52.087 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ef6b0f4e7a ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mh8hq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" Sep 12 17:11:52.123538 containerd[1529]: 2025-09-12 17:11:52.094 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mh8hq" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" Sep 12 17:11:52.123538 containerd[1529]: 2025-09-12 17:11:52.097 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mh8hq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"506ce59b-3fed-457d-9fb4-90edc9572ad2", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276", Pod:"coredns-7c65d6cfc9-mh8hq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3ef6b0f4e7a", MAC:"c2:e8:0a:ec:ad:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.123538 containerd[1529]: 2025-09-12 17:11:52.117 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mh8hq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mh8hq-eth0" Sep 12 17:11:52.139443 systemd[1]: Started cri-containerd-0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07.scope - libcontainer container 0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07. Sep 12 17:11:52.166951 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:11:52.195935 containerd[1529]: time="2025-09-12T17:11:52.195815075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57884765c9-86qks,Uid:75c7d3f4-2baa-4dc5-9d2b-3e31c0555321,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07\"" Sep 12 17:11:52.203018 containerd[1529]: time="2025-09-12T17:11:52.202844163Z" level=info msg="connecting to shim 3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276" address="unix:///run/containerd/s/5842391227730123984ed71e06a725e1512be3cd0b403270610b9e18d5bb8bb5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:52.228392 systemd[1]: Started cri-containerd-3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276.scope - libcontainer container 3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276. 
Sep 12 17:11:52.245003 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:11:52.272383 containerd[1529]: time="2025-09-12T17:11:52.272342969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mh8hq,Uid:506ce59b-3fed-457d-9fb4-90edc9572ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276\"" Sep 12 17:11:52.278864 containerd[1529]: time="2025-09-12T17:11:52.278353416Z" level=info msg="CreateContainer within sandbox \"3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:11:52.305480 containerd[1529]: time="2025-09-12T17:11:52.305256169Z" level=info msg="Container d21e13ac3ac3f17425a48bbf7cdb520e857344538e42e26e177fc94c6f1225cc: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:52.313889 containerd[1529]: time="2025-09-12T17:11:52.313853740Z" level=info msg="CreateContainer within sandbox \"3a033a3907908de53d48654d59e2e69b2cf6fc27778c333d99573faa4356d276\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d21e13ac3ac3f17425a48bbf7cdb520e857344538e42e26e177fc94c6f1225cc\"" Sep 12 17:11:52.314454 containerd[1529]: time="2025-09-12T17:11:52.314352621Z" level=info msg="StartContainer for \"d21e13ac3ac3f17425a48bbf7cdb520e857344538e42e26e177fc94c6f1225cc\"" Sep 12 17:11:52.315125 containerd[1529]: time="2025-09-12T17:11:52.315073781Z" level=info msg="connecting to shim d21e13ac3ac3f17425a48bbf7cdb520e857344538e42e26e177fc94c6f1225cc" address="unix:///run/containerd/s/5842391227730123984ed71e06a725e1512be3cd0b403270610b9e18d5bb8bb5" protocol=ttrpc version=3 Sep 12 17:11:52.341309 systemd[1]: Started cri-containerd-d21e13ac3ac3f17425a48bbf7cdb520e857344538e42e26e177fc94c6f1225cc.scope - libcontainer container d21e13ac3ac3f17425a48bbf7cdb520e857344538e42e26e177fc94c6f1225cc. 
Sep 12 17:11:52.351258 systemd-networkd[1449]: calia3cb83a363e: Gained IPv6LL Sep 12 17:11:52.379128 containerd[1529]: time="2025-09-12T17:11:52.379072820Z" level=info msg="StartContainer for \"d21e13ac3ac3f17425a48bbf7cdb520e857344538e42e26e177fc94c6f1225cc\" returns successfully" Sep 12 17:11:52.411357 systemd-networkd[1449]: calif8acee5e83c: Gained IPv6LL Sep 12 17:11:52.685293 containerd[1529]: time="2025-09-12T17:11:52.685247877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-jfdwz,Uid:3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:52.716470 containerd[1529]: time="2025-09-12T17:11:52.716420236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:52.717601 containerd[1529]: time="2025-09-12T17:11:52.717572157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:11:52.718598 containerd[1529]: time="2025-09-12T17:11:52.718562878Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:52.721389 containerd[1529]: time="2025-09-12T17:11:52.721333162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:52.722728 containerd[1529]: time="2025-09-12T17:11:52.722683883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size 
\"45900064\" in 1.680711628s" Sep 12 17:11:52.722838 containerd[1529]: time="2025-09-12T17:11:52.722820924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:11:52.724377 containerd[1529]: time="2025-09-12T17:11:52.724352646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:11:52.725316 containerd[1529]: time="2025-09-12T17:11:52.725253167Z" level=info msg="CreateContainer within sandbox \"85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:11:52.742173 containerd[1529]: time="2025-09-12T17:11:52.739829625Z" level=info msg="Container 0e147b1c10cef3bcdf2b7eeffe94bac5996651eb8145cc0a125c7b1279edf7f1: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:52.751982 containerd[1529]: time="2025-09-12T17:11:52.751926639Z" level=info msg="CreateContainer within sandbox \"85790662aae251451684a8a3587617836d5622579fafe8f25b5e97e852450b8e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0e147b1c10cef3bcdf2b7eeffe94bac5996651eb8145cc0a125c7b1279edf7f1\"" Sep 12 17:11:52.752829 containerd[1529]: time="2025-09-12T17:11:52.752770721Z" level=info msg="StartContainer for \"0e147b1c10cef3bcdf2b7eeffe94bac5996651eb8145cc0a125c7b1279edf7f1\"" Sep 12 17:11:52.754153 containerd[1529]: time="2025-09-12T17:11:52.754091282Z" level=info msg="connecting to shim 0e147b1c10cef3bcdf2b7eeffe94bac5996651eb8145cc0a125c7b1279edf7f1" address="unix:///run/containerd/s/f7e1403664ca23073e45c1f78af2e3aae21970e0ce67e66c6cb5443e68fabad9" protocol=ttrpc version=3 Sep 12 17:11:52.781311 systemd[1]: Started cri-containerd-0e147b1c10cef3bcdf2b7eeffe94bac5996651eb8145cc0a125c7b1279edf7f1.scope - libcontainer container 0e147b1c10cef3bcdf2b7eeffe94bac5996651eb8145cc0a125c7b1279edf7f1. 
Sep 12 17:11:52.837057 systemd-networkd[1449]: calid65b0def04a: Link UP Sep 12 17:11:52.837654 systemd-networkd[1449]: calid65b0def04a: Gained carrier Sep 12 17:11:52.848962 containerd[1529]: time="2025-09-12T17:11:52.847966838Z" level=info msg="StartContainer for \"0e147b1c10cef3bcdf2b7eeffe94bac5996651eb8145cc0a125c7b1279edf7f1\" returns successfully" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.715 [INFO][4865] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.739 [INFO][4865] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--jfdwz-eth0 goldmane-7988f88666- calico-system 3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8 837 0 2025-09-12 17:11:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-jfdwz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid65b0def04a [] [] }} ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Namespace="calico-system" Pod="goldmane-7988f88666-jfdwz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jfdwz-" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.739 [INFO][4865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Namespace="calico-system" Pod="goldmane-7988f88666-jfdwz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.778 [INFO][4883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" 
HandleID="k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Workload="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.778 [INFO][4883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" HandleID="k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Workload="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-jfdwz", "timestamp":"2025-09-12 17:11:52.778476152 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.778 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.778 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.778 [INFO][4883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.789 [INFO][4883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.795 [INFO][4883] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.802 [INFO][4883] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.804 [INFO][4883] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.807 [INFO][4883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.807 [INFO][4883] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.809 [INFO][4883] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8 Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.819 [INFO][4883] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.830 [INFO][4883] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.830 [INFO][4883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" host="localhost" Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.830 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:52.864215 containerd[1529]: 2025-09-12 17:11:52.830 [INFO][4883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" HandleID="k8s-pod-network.79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Workload="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" Sep 12 17:11:52.864734 containerd[1529]: 2025-09-12 17:11:52.834 [INFO][4865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Namespace="calico-system" Pod="goldmane-7988f88666-jfdwz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--jfdwz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-jfdwz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid65b0def04a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.864734 containerd[1529]: 2025-09-12 17:11:52.834 [INFO][4865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Namespace="calico-system" Pod="goldmane-7988f88666-jfdwz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" Sep 12 17:11:52.864734 containerd[1529]: 2025-09-12 17:11:52.834 [INFO][4865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid65b0def04a ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Namespace="calico-system" Pod="goldmane-7988f88666-jfdwz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" Sep 12 17:11:52.864734 containerd[1529]: 2025-09-12 17:11:52.838 [INFO][4865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Namespace="calico-system" Pod="goldmane-7988f88666-jfdwz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" Sep 12 17:11:52.864734 containerd[1529]: 2025-09-12 17:11:52.838 [INFO][4865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Namespace="calico-system" 
Pod="goldmane-7988f88666-jfdwz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--jfdwz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8", Pod:"goldmane-7988f88666-jfdwz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid65b0def04a", MAC:"1a:bb:e0:d3:7a:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.864734 containerd[1529]: 2025-09-12 17:11:52.861 [INFO][4865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" Namespace="calico-system" Pod="goldmane-7988f88666-jfdwz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jfdwz-eth0" Sep 12 17:11:52.888834 containerd[1529]: time="2025-09-12T17:11:52.888778328Z" 
level=info msg="connecting to shim 79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8" address="unix:///run/containerd/s/dabd35d9cd23f390ce613ebddb1035bf7af787e00377943907057b6ebd635cd5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:11:52.925331 systemd[1]: Started cri-containerd-79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8.scope - libcontainer container 79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8. Sep 12 17:11:52.937646 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:11:52.959737 kubelet[2670]: I0912 17:11:52.959343 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mh8hq" podStartSLOduration=39.959323935 podStartE2EDuration="39.959323935s" podCreationTimestamp="2025-09-12 17:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:52.958548254 +0000 UTC m=+45.377723264" watchObservedRunningTime="2025-09-12 17:11:52.959323935 +0000 UTC m=+45.378498945" Sep 12 17:11:53.005129 containerd[1529]: time="2025-09-12T17:11:53.003493669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-jfdwz,Uid:3b0f4287-82b3-4e4b-bfe7-2225c26f9fd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8\"" Sep 12 17:11:53.015796 kubelet[2670]: I0912 17:11:53.015735 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57884765c9-vlpk6" podStartSLOduration=27.333478492 podStartE2EDuration="29.015716323s" podCreationTimestamp="2025-09-12 17:11:24 +0000 UTC" firstStartedPulling="2025-09-12 17:11:51.041568814 +0000 UTC m=+43.460743784" lastFinishedPulling="2025-09-12 17:11:52.723806605 +0000 UTC m=+45.142981615" observedRunningTime="2025-09-12 
17:11:53.015111323 +0000 UTC m=+45.434286333" watchObservedRunningTime="2025-09-12 17:11:53.015716323 +0000 UTC m=+45.434891333" Sep 12 17:11:53.051235 systemd-networkd[1449]: cali814f52e3ffc: Gained IPv6LL Sep 12 17:11:53.368315 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:42962.service - OpenSSH per-connection server daemon (10.0.0.1:42962). Sep 12 17:11:53.451436 sshd[5013]: Accepted publickey for core from 10.0.0.1 port 42962 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:11:53.455986 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:53.467385 systemd-logind[1509]: New session 9 of user core. Sep 12 17:11:53.476849 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:11:53.563508 systemd-networkd[1449]: calib6b1f3784a3: Gained IPv6LL Sep 12 17:11:53.899726 sshd[5016]: Connection closed by 10.0.0.1 port 42962 Sep 12 17:11:53.900093 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:53.907544 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:42962.service: Deactivated successfully. Sep 12 17:11:53.912529 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:11:53.917143 kubelet[2670]: I0912 17:11:53.917081 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:53.919039 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:11:53.921536 systemd-logind[1509]: Removed session 9. 
Sep 12 17:11:53.947267 systemd-networkd[1449]: calid65b0def04a: Gained IPv6LL Sep 12 17:11:54.011409 systemd-networkd[1449]: cali3ef6b0f4e7a: Gained IPv6LL Sep 12 17:11:54.599292 containerd[1529]: time="2025-09-12T17:11:54.599214108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:54.599906 containerd[1529]: time="2025-09-12T17:11:54.599883989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:11:54.601383 containerd[1529]: time="2025-09-12T17:11:54.601321111Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:54.603579 containerd[1529]: time="2025-09-12T17:11:54.603523753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:54.604091 containerd[1529]: time="2025-09-12T17:11:54.604057754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.879439948s" Sep 12 17:11:54.604158 containerd[1529]: time="2025-09-12T17:11:54.604093034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:11:54.605087 containerd[1529]: time="2025-09-12T17:11:54.605041675Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:11:54.615507 containerd[1529]: time="2025-09-12T17:11:54.615442806Z" level=info msg="CreateContainer within sandbox \"ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:11:54.627683 containerd[1529]: time="2025-09-12T17:11:54.627632619Z" level=info msg="Container 2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:54.639726 containerd[1529]: time="2025-09-12T17:11:54.639669112Z" level=info msg="CreateContainer within sandbox \"ee7ca29084de1251072bd0377da1685381da811673a01c330268cb728a261130\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c\"" Sep 12 17:11:54.641298 containerd[1529]: time="2025-09-12T17:11:54.641239034Z" level=info msg="StartContainer for \"2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c\"" Sep 12 17:11:54.643618 containerd[1529]: time="2025-09-12T17:11:54.643577917Z" level=info msg="connecting to shim 2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c" address="unix:///run/containerd/s/ade64b9707850f9b6dfdf8af540ca5c2665f07cbadf9ab26c9c696df6f0e7e1c" protocol=ttrpc version=3 Sep 12 17:11:54.677390 systemd[1]: Started cri-containerd-2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c.scope - libcontainer container 2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c. 
Sep 12 17:11:54.751235 containerd[1529]: time="2025-09-12T17:11:54.751170073Z" level=info msg="StartContainer for \"2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c\" returns successfully" Sep 12 17:11:54.938202 kubelet[2670]: I0912 17:11:54.938035 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bd58f486c-dc94g" podStartSLOduration=22.431690289 podStartE2EDuration="25.938001235s" podCreationTimestamp="2025-09-12 17:11:29 +0000 UTC" firstStartedPulling="2025-09-12 17:11:51.098553209 +0000 UTC m=+43.517728219" lastFinishedPulling="2025-09-12 17:11:54.604864155 +0000 UTC m=+47.024039165" observedRunningTime="2025-09-12 17:11:54.937560155 +0000 UTC m=+47.356735165" watchObservedRunningTime="2025-09-12 17:11:54.938001235 +0000 UTC m=+47.357176245" Sep 12 17:11:55.930817 containerd[1529]: time="2025-09-12T17:11:55.930762847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:55.931761 containerd[1529]: time="2025-09-12T17:11:55.931722448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:11:55.933727 containerd[1529]: time="2025-09-12T17:11:55.933690490Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:55.936469 containerd[1529]: time="2025-09-12T17:11:55.936421173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:55.937205 containerd[1529]: time="2025-09-12T17:11:55.936869333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id 
\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.331786418s" Sep 12 17:11:55.937205 containerd[1529]: time="2025-09-12T17:11:55.936900413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:11:55.937958 containerd[1529]: time="2025-09-12T17:11:55.937932734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:11:55.939141 containerd[1529]: time="2025-09-12T17:11:55.938682375Z" level=info msg="CreateContainer within sandbox \"7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:11:55.950151 containerd[1529]: time="2025-09-12T17:11:55.950093507Z" level=info msg="Container 64b5c718db37ad76779c8ce50e7aa57d56edb130b22bb7c57cc55db3ae65c71c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:55.967214 containerd[1529]: time="2025-09-12T17:11:55.967172724Z" level=info msg="CreateContainer within sandbox \"7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"64b5c718db37ad76779c8ce50e7aa57d56edb130b22bb7c57cc55db3ae65c71c\"" Sep 12 17:11:55.968136 containerd[1529]: time="2025-09-12T17:11:55.967926965Z" level=info msg="StartContainer for \"64b5c718db37ad76779c8ce50e7aa57d56edb130b22bb7c57cc55db3ae65c71c\"" Sep 12 17:11:55.969473 containerd[1529]: time="2025-09-12T17:11:55.969441366Z" level=info msg="connecting to shim 64b5c718db37ad76779c8ce50e7aa57d56edb130b22bb7c57cc55db3ae65c71c" address="unix:///run/containerd/s/1cd8cee0b4b0603963e0e399dc5b816aa13b7567ca575e2c2e17322fc84896d2" protocol=ttrpc version=3 Sep 12 17:11:55.990369 
systemd[1]: Started cri-containerd-64b5c718db37ad76779c8ce50e7aa57d56edb130b22bb7c57cc55db3ae65c71c.scope - libcontainer container 64b5c718db37ad76779c8ce50e7aa57d56edb130b22bb7c57cc55db3ae65c71c. Sep 12 17:11:56.024130 containerd[1529]: time="2025-09-12T17:11:56.023205580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c\" id:\"7d1d7cb23228ba0f1d17a84040dd808cf83a04acaa546d8dd3b9f361dbb7b39a\" pid:5165 exited_at:{seconds:1757697116 nanos:15924573}" Sep 12 17:11:56.042616 containerd[1529]: time="2025-09-12T17:11:56.042579078Z" level=info msg="StartContainer for \"64b5c718db37ad76779c8ce50e7aa57d56edb130b22bb7c57cc55db3ae65c71c\" returns successfully" Sep 12 17:11:56.310001 containerd[1529]: time="2025-09-12T17:11:56.309932972Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:56.310493 containerd[1529]: time="2025-09-12T17:11:56.310466173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:11:56.315304 containerd[1529]: time="2025-09-12T17:11:56.315013577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 377.049523ms" Sep 12 17:11:56.315304 containerd[1529]: time="2025-09-12T17:11:56.315052697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:11:56.318178 containerd[1529]: time="2025-09-12T17:11:56.317671900Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:11:56.322241 containerd[1529]: time="2025-09-12T17:11:56.321737544Z" level=info msg="CreateContainer within sandbox \"0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:11:56.337080 containerd[1529]: time="2025-09-12T17:11:56.336764718Z" level=info msg="Container ebeb5a405a9df19854645ab8c2d7c571baaad71a2e46f7e1d830c3e7c2177b94: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:11:56.347974 containerd[1529]: time="2025-09-12T17:11:56.347906768Z" level=info msg="CreateContainer within sandbox \"0e58d85205fce58cf90e160ab30980d3dc27deba31ec361c086ad47d4a2ece07\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ebeb5a405a9df19854645ab8c2d7c571baaad71a2e46f7e1d830c3e7c2177b94\"" Sep 12 17:11:56.349915 containerd[1529]: time="2025-09-12T17:11:56.349868250Z" level=info msg="StartContainer for \"ebeb5a405a9df19854645ab8c2d7c571baaad71a2e46f7e1d830c3e7c2177b94\"" Sep 12 17:11:56.353374 containerd[1529]: time="2025-09-12T17:11:56.353318934Z" level=info msg="connecting to shim ebeb5a405a9df19854645ab8c2d7c571baaad71a2e46f7e1d830c3e7c2177b94" address="unix:///run/containerd/s/24e03dcfcac8cd361d22a66c09b00ea58636c5811b3df1e786131f2f31d67304" protocol=ttrpc version=3 Sep 12 17:11:56.380294 systemd[1]: Started cri-containerd-ebeb5a405a9df19854645ab8c2d7c571baaad71a2e46f7e1d830c3e7c2177b94.scope - libcontainer container ebeb5a405a9df19854645ab8c2d7c571baaad71a2e46f7e1d830c3e7c2177b94. 
Sep 12 17:11:56.437026 containerd[1529]: time="2025-09-12T17:11:56.436953173Z" level=info msg="StartContainer for \"ebeb5a405a9df19854645ab8c2d7c571baaad71a2e46f7e1d830c3e7c2177b94\" returns successfully" Sep 12 17:11:56.951549 kubelet[2670]: I0912 17:11:56.951468 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57884765c9-86qks" podStartSLOduration=28.832738721 podStartE2EDuration="32.951450943s" podCreationTimestamp="2025-09-12 17:11:24 +0000 UTC" firstStartedPulling="2025-09-12 17:11:52.198203957 +0000 UTC m=+44.617378927" lastFinishedPulling="2025-09-12 17:11:56.316916139 +0000 UTC m=+48.736091149" observedRunningTime="2025-09-12 17:11:56.951175702 +0000 UTC m=+49.370350752" watchObservedRunningTime="2025-09-12 17:11:56.951450943 +0000 UTC m=+49.370625953" Sep 12 17:11:56.989972 kubelet[2670]: I0912 17:11:56.989898 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:57.082377 containerd[1529]: time="2025-09-12T17:11:57.082337582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd\" id:\"d1f2523f48849417ac7f0892bba931eb0453583cb072aaf802030fce60b8bdc6\" pid:5266 exit_status:1 exited_at:{seconds:1757697117 nanos:82055942}" Sep 12 17:11:57.162387 containerd[1529]: time="2025-09-12T17:11:57.162349854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b95f51942353d0597d9d560024a5e8efd797d01ad36697b80d4892bea5d1ffd\" id:\"a2e36392c390051b439d46300d2bfeda0549545e775ad95587fa0960edbe0701\" pid:5291 exit_status:1 exited_at:{seconds:1757697117 nanos:162039253}" Sep 12 17:11:57.932451 kubelet[2670]: I0912 17:11:57.932415 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:57.993465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925935767.mount: Deactivated successfully. 
Sep 12 17:11:58.344077 kubelet[2670]: I0912 17:11:58.344025 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:11:58.430321 containerd[1529]: time="2025-09-12T17:11:58.430277921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:58.431149 containerd[1529]: time="2025-09-12T17:11:58.430992921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 12 17:11:58.431955 containerd[1529]: time="2025-09-12T17:11:58.431922682Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:58.434171 containerd[1529]: time="2025-09-12T17:11:58.434136204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:11:58.434870 containerd[1529]: time="2025-09-12T17:11:58.434829645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.117126065s"
Sep 12 17:11:58.434926 containerd[1529]: time="2025-09-12T17:11:58.434878885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 12 17:11:58.435865 containerd[1529]: time="2025-09-12T17:11:58.435812485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:11:58.437909 containerd[1529]: time="2025-09-12T17:11:58.437877527Z" level=info msg="CreateContainer within sandbox \"79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 17:11:58.446676 containerd[1529]: time="2025-09-12T17:11:58.445312213Z" level=info msg="Container 4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:11:58.454129 containerd[1529]: time="2025-09-12T17:11:58.454070261Z" level=info msg="CreateContainer within sandbox \"79be9b8c6133dd545180dd2f55f493e9982c2b2a2a43660c7fb5c29c068cebb8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25\""
Sep 12 17:11:58.454776 containerd[1529]: time="2025-09-12T17:11:58.454743821Z" level=info msg="StartContainer for \"4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25\""
Sep 12 17:11:58.456010 containerd[1529]: time="2025-09-12T17:11:58.455961982Z" level=info msg="connecting to shim 4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25" address="unix:///run/containerd/s/dabd35d9cd23f390ce613ebddb1035bf7af787e00377943907057b6ebd635cd5" protocol=ttrpc version=3
Sep 12 17:11:58.484286 systemd[1]: Started cri-containerd-4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25.scope - libcontainer container 4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25.
Sep 12 17:11:58.547430 containerd[1529]: time="2025-09-12T17:11:58.547187698Z" level=info msg="StartContainer for \"4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25\" returns successfully"
Sep 12 17:11:58.915734 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:42976.service - OpenSSH per-connection server daemon (10.0.0.1:42976).
Sep 12 17:11:59.038888 sshd[5431]: Accepted publickey for core from 10.0.0.1 port 42976 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:11:59.041182 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:59.045560 systemd-logind[1509]: New session 10 of user core.
Sep 12 17:11:59.053291 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:11:59.296644 systemd-networkd[1449]: vxlan.calico: Link UP
Sep 12 17:11:59.296651 systemd-networkd[1449]: vxlan.calico: Gained carrier
Sep 12 17:11:59.298495 sshd[5447]: Connection closed by 10.0.0.1 port 42976
Sep 12 17:11:59.299029 sshd-session[5431]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:59.316193 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:42976.service: Deactivated successfully.
Sep 12 17:11:59.318539 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:11:59.319811 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:11:59.325068 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:42986.service - OpenSSH per-connection server daemon (10.0.0.1:42986).
Sep 12 17:11:59.326493 systemd-logind[1509]: Removed session 10.
Sep 12 17:11:59.385667 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 42986 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:11:59.387424 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:59.392210 systemd-logind[1509]: New session 11 of user core.
Sep 12 17:11:59.403324 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:11:59.694064 sshd[5511]: Connection closed by 10.0.0.1 port 42986
Sep 12 17:11:59.695166 sshd-session[5493]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:59.712819 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:42986.service: Deactivated successfully.
Sep 12 17:11:59.719635 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:11:59.723377 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:11:59.726699 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:43002.service - OpenSSH per-connection server daemon (10.0.0.1:43002).
Sep 12 17:11:59.727629 systemd-logind[1509]: Removed session 11.
Sep 12 17:11:59.791038 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 43002 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:11:59.792433 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:11:59.797298 systemd-logind[1509]: New session 12 of user core.
Sep 12 17:11:59.802298 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:11:59.947985 sshd[5566]: Connection closed by 10.0.0.1 port 43002
Sep 12 17:11:59.948400 sshd-session[5563]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:59.954705 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:43002.service: Deactivated successfully.
Sep 12 17:11:59.960037 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:11:59.961195 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:11:59.962180 systemd-logind[1509]: Removed session 12.
Sep 12 17:12:00.041170 containerd[1529]: time="2025-09-12T17:12:00.040916931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25\" id:\"874003b6be1c50e41c035a0c0cec8ecdba7eaf31c374ce01d23f6bdb21b79c44\" pid:5591 exit_status:1 exited_at:{seconds:1757697120 nanos:40142011}"
Sep 12 17:12:00.429076 containerd[1529]: time="2025-09-12T17:12:00.429026856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:00.430261 containerd[1529]: time="2025-09-12T17:12:00.430228777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 17:12:00.432004 containerd[1529]: time="2025-09-12T17:12:00.431970379Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:00.441076 containerd[1529]: time="2025-09-12T17:12:00.441021585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:00.441869 containerd[1529]: time="2025-09-12T17:12:00.441772226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.005924461s"
Sep 12 17:12:00.441869 containerd[1529]: time="2025-09-12T17:12:00.441810706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 17:12:00.452674 containerd[1529]: time="2025-09-12T17:12:00.452632434Z" level=info msg="CreateContainer within sandbox \"7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:12:00.539569 containerd[1529]: time="2025-09-12T17:12:00.539506218Z" level=info msg="Container 292a1438cf48ad49c1d146730a516be8421c24b22894f32e758d3eafaf82f43f: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:12:00.553053 containerd[1529]: time="2025-09-12T17:12:00.553005988Z" level=info msg="CreateContainer within sandbox \"7d09cbb3abcad22e70813a4ba022ef9cc731c2af4b3bbd63a6e228b4571a23a4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"292a1438cf48ad49c1d146730a516be8421c24b22894f32e758d3eafaf82f43f\""
Sep 12 17:12:00.553714 containerd[1529]: time="2025-09-12T17:12:00.553675668Z" level=info msg="StartContainer for \"292a1438cf48ad49c1d146730a516be8421c24b22894f32e758d3eafaf82f43f\""
Sep 12 17:12:00.555385 containerd[1529]: time="2025-09-12T17:12:00.555338589Z" level=info msg="connecting to shim 292a1438cf48ad49c1d146730a516be8421c24b22894f32e758d3eafaf82f43f" address="unix:///run/containerd/s/1cd8cee0b4b0603963e0e399dc5b816aa13b7567ca575e2c2e17322fc84896d2" protocol=ttrpc version=3
Sep 12 17:12:00.581325 systemd[1]: Started cri-containerd-292a1438cf48ad49c1d146730a516be8421c24b22894f32e758d3eafaf82f43f.scope - libcontainer container 292a1438cf48ad49c1d146730a516be8421c24b22894f32e758d3eafaf82f43f.
Sep 12 17:12:00.623729 containerd[1529]: time="2025-09-12T17:12:00.623676480Z" level=info msg="StartContainer for \"292a1438cf48ad49c1d146730a516be8421c24b22894f32e758d3eafaf82f43f\" returns successfully"
Sep 12 17:12:00.667264 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL
Sep 12 17:12:00.769436 kubelet[2670]: I0912 17:12:00.769288 2670 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:12:00.772077 kubelet[2670]: I0912 17:12:00.772018 2670 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:12:00.970198 kubelet[2670]: I0912 17:12:00.969365 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-jfdwz" podStartSLOduration=26.540726042 podStartE2EDuration="31.969349654s" podCreationTimestamp="2025-09-12 17:11:29 +0000 UTC" firstStartedPulling="2025-09-12 17:11:53.007045513 +0000 UTC m=+45.426220483" lastFinishedPulling="2025-09-12 17:11:58.435669085 +0000 UTC m=+50.854844095" observedRunningTime="2025-09-12 17:11:58.961975405 +0000 UTC m=+51.381150415" watchObservedRunningTime="2025-09-12 17:12:00.969349654 +0000 UTC m=+53.388524664"
Sep 12 17:12:01.019700 containerd[1529]: time="2025-09-12T17:12:01.019514330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25\" id:\"8d849c747000e573b51d44607ae41b8156af054ea8b4895545287c041c22170b\" pid:5653 exited_at:{seconds:1757697121 nanos:19210089}"
Sep 12 17:12:01.034581 kubelet[2670]: I0912 17:12:01.034075 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nkw85" podStartSLOduration=23.54491457 podStartE2EDuration="32.03405698s" podCreationTimestamp="2025-09-12 17:11:29 +0000 UTC" firstStartedPulling="2025-09-12 17:11:51.962005943 +0000 UTC m=+44.381180953" lastFinishedPulling="2025-09-12 17:12:00.451148353 +0000 UTC m=+52.870323363" observedRunningTime="2025-09-12 17:12:00.969619814 +0000 UTC m=+53.388795064" watchObservedRunningTime="2025-09-12 17:12:01.03405698 +0000 UTC m=+53.453231990"
Sep 12 17:12:04.959826 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:49206.service - OpenSSH per-connection server daemon (10.0.0.1:49206).
Sep 12 17:12:05.019759 sshd[5675]: Accepted publickey for core from 10.0.0.1 port 49206 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:12:05.021352 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:05.025638 systemd-logind[1509]: New session 13 of user core.
Sep 12 17:12:05.032318 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:12:05.229922 sshd[5678]: Connection closed by 10.0.0.1 port 49206
Sep 12 17:12:05.231481 sshd-session[5675]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:05.240861 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:49206.service: Deactivated successfully.
Sep 12 17:12:05.243529 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:12:05.246973 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:12:05.247203 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:49218.service - OpenSSH per-connection server daemon (10.0.0.1:49218).
Sep 12 17:12:05.248978 systemd-logind[1509]: Removed session 13.
Sep 12 17:12:05.312312 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 49218 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:12:05.313921 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:05.319029 systemd-logind[1509]: New session 14 of user core.
Sep 12 17:12:05.336355 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:12:05.593394 sshd[5695]: Connection closed by 10.0.0.1 port 49218
Sep 12 17:12:05.593833 sshd-session[5692]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:05.606958 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:49218.service: Deactivated successfully.
Sep 12 17:12:05.610593 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:12:05.611683 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:12:05.616866 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:49222.service - OpenSSH per-connection server daemon (10.0.0.1:49222).
Sep 12 17:12:05.619276 systemd-logind[1509]: Removed session 14.
Sep 12 17:12:05.687430 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 49222 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:12:05.689020 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:05.693452 systemd-logind[1509]: New session 15 of user core.
Sep 12 17:12:05.708395 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:12:07.299577 sshd[5709]: Connection closed by 10.0.0.1 port 49222
Sep 12 17:12:07.300448 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:07.310027 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:49222.service: Deactivated successfully.
Sep 12 17:12:07.312551 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:12:07.313055 systemd[1]: session-15.scope: Consumed 575ms CPU time, 73.7M memory peak.
Sep 12 17:12:07.315059 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:12:07.318083 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:49236.service - OpenSSH per-connection server daemon (10.0.0.1:49236).
Sep 12 17:12:07.320800 systemd-logind[1509]: Removed session 15.
Sep 12 17:12:07.376328 sshd[5730]: Accepted publickey for core from 10.0.0.1 port 49236 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:12:07.377649 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:07.381817 systemd-logind[1509]: New session 16 of user core.
Sep 12 17:12:07.395301 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:12:07.681647 sshd[5733]: Connection closed by 10.0.0.1 port 49236
Sep 12 17:12:07.681781 sshd-session[5730]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:07.692291 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:49236.service: Deactivated successfully.
Sep 12 17:12:07.696109 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:12:07.697862 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:12:07.702550 systemd-logind[1509]: Removed session 16.
Sep 12 17:12:07.705348 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:49242.service - OpenSSH per-connection server daemon (10.0.0.1:49242).
Sep 12 17:12:07.771781 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 49242 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:12:07.773166 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:07.777945 systemd-logind[1509]: New session 17 of user core.
Sep 12 17:12:07.790316 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:12:07.931762 sshd[5750]: Connection closed by 10.0.0.1 port 49242
Sep 12 17:12:07.932008 sshd-session[5747]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:07.937633 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:49242.service: Deactivated successfully.
Sep 12 17:12:07.939929 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:12:07.942979 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:12:07.945287 systemd-logind[1509]: Removed session 17.
Sep 12 17:12:09.869625 containerd[1529]: time="2025-09-12T17:12:09.869559520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c\" id:\"a1e22933f3a767361a7d64ed6b9e8a94e07d39edb879c96aa5ed64ba48896eee\" pid:5783 exited_at:{seconds:1757697129 nanos:869096320}"
Sep 12 17:12:10.205181 containerd[1529]: time="2025-09-12T17:12:10.205036133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ce4dc7d897b6df15044969c8343852cf5c2090b077fc656f53b27c426d3f58c\" id:\"b249d98a695a4ecd928078a64bd8785e191e41a791d461338552d6568b261e61\" pid:5805 exited_at:{seconds:1757697130 nanos:204792532}"
Sep 12 17:12:12.944370 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:42820.service - OpenSSH per-connection server daemon (10.0.0.1:42820).
Sep 12 17:12:13.011189 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 42820 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:12:13.012594 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:13.017201 systemd-logind[1509]: New session 18 of user core.
Sep 12 17:12:13.026287 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:12:13.153782 sshd[5823]: Connection closed by 10.0.0.1 port 42820
Sep 12 17:12:13.154148 sshd-session[5819]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:13.158270 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:42820.service: Deactivated successfully.
Sep 12 17:12:13.160892 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:12:13.161605 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:12:13.163110 systemd-logind[1509]: Removed session 18.
Sep 12 17:12:15.854248 kubelet[2670]: I0912 17:12:15.854202 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:12:17.376811 containerd[1529]: time="2025-09-12T17:12:17.376766838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c35d135dcacf73e03ecac6f98ed835675774094a0e77c8bd2602c8045da2b25\" id:\"32efcef7531dc7a5abcb46239f008acfc0f878b7ba366803a350039ebe6f706d\" pid:5852 exited_at:{seconds:1757697137 nanos:375852099}"
Sep 12 17:12:18.169898 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:42832.service - OpenSSH per-connection server daemon (10.0.0.1:42832).
Sep 12 17:12:18.229195 sshd[5864]: Accepted publickey for core from 10.0.0.1 port 42832 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:12:18.230579 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:18.236272 systemd-logind[1509]: New session 19 of user core.
Sep 12 17:12:18.245345 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:12:18.380872 sshd[5867]: Connection closed by 10.0.0.1 port 42832
Sep 12 17:12:18.381362 sshd-session[5864]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:18.389853 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:12:18.390379 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:42832.service: Deactivated successfully.
Sep 12 17:12:18.393078 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:12:18.395817 systemd-logind[1509]: Removed session 19.
Sep 12 17:12:23.398941 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:34780.service - OpenSSH per-connection server daemon (10.0.0.1:34780).
Sep 12 17:12:23.459631 sshd[5892]: Accepted publickey for core from 10.0.0.1 port 34780 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:12:23.461075 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:23.466582 systemd-logind[1509]: New session 20 of user core.
Sep 12 17:12:23.481356 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:12:23.633760 sshd[5895]: Connection closed by 10.0.0.1 port 34780
Sep 12 17:12:23.634319 sshd-session[5892]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:23.638658 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:34780.service: Deactivated successfully.
Sep 12 17:12:23.640536 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:12:23.641415 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:12:23.642956 systemd-logind[1509]: Removed session 20.